
Latest publications in IEEE Transactions on Visualization and Computer Graphics

The Language of Infographics: Toward Understanding Conceptual Metaphor Use in Scientific Storytelling
Pub Date: 2024-09-17 DOI: 10.1109/TVCG.2024.3456327
Hana Pokojná;Tobias Isenberg;Stefan Bruckner;Barbora Kozlíková;Laura Garrison
We apply an approach from cognitive linguistics by mapping Conceptual Metaphor Theory (CMT) to the visualization domain to address patterns of visual conceptual metaphors that are often used in science infographics. Metaphors play an essential part in visual communication and are frequently employed to explain complex concepts. However, their use is often based on intuition, rather than following a formal process. At present, we lack tools and language for understanding and describing metaphor use in visualization to the extent where taxonomy and grammar could guide the creation of visual components, e.g., infographics. Our classification of the visual conceptual mappings within scientific representations is based on the breakdown of visual components in existing scientific infographics. We demonstrate the development of this mapping through a detailed analysis of data collected from four domains (biomedicine, climate, space, and anthropology) that represent a diverse range of visual conceptual metaphors used in the visual communication of science. This work allows us to identify patterns of visual conceptual metaphor use within the domains, resolve ambiguities about why specific conceptual metaphors are used, and develop a better overall understanding of visual metaphor use in scientific infographics. Our analysis shows that ontological and orientational conceptual metaphors are the most widely applied to translate complex scientific concepts. To support our findings, we developed a visual exploratory tool based on the collected database that places the individual infographics on a spatio-temporal scale and illustrates the breakdown of visual conceptual metaphors.
Published in IEEE Transactions on Visualization and Computer Graphics, vol. 31, no. 1, pp. 371-381.
Citations: 0
A Large-Scale Sensitivity Analysis on Latent Embeddings and Dimensionality Reductions for Text Spatializations
Pub Date: 2024-09-17 DOI: 10.1109/TVCG.2024.3456308
Daniel Atzberger;Tim Cech;Willy Scheibel;Jürgen Döllner;Michael Behrisch;Tobias Schreck
The semantic similarity between documents of a text corpus can be visualized using map-like metaphors based on two-dimensional scatterplot layouts. These layouts result from a dimensionality reduction on the document-term matrix or a representation within a latent embedding, including topic models. The resulting layout therefore depends on the input data and the hyperparameters of the dimensionality reduction, and is affected by changes in either. However, such changes to the layout require additional cognitive efforts from the user. In this work, we present a sensitivity study that analyzes the stability of these layouts concerning (1) changes in the text corpora, (2) changes in the hyperparameters, and (3) randomness in the initialization. Our approach has two stages: data measurement and data analysis. First, we derived layouts for the combination of three text corpora and six text embeddings and a grid-search-inspired hyperparameter selection of the dimensionality reductions. Afterward, we quantified the similarity of the layouts through ten metrics concerning local and global structures and class separation. Second, we analyzed the resulting 42,817 tabular data points in a descriptive statistical analysis. From this, we derived guidelines for informed decisions on the layout algorithm and highlighted specific hyperparameter settings. We provide our implementation as a Git repository at hpicgs/Topic-Models-and-Dimensionality-Reduction-Sensitivity-Study and results as a Zenodo archive at DOI:10.5281/zenodo.12772898.
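The measurement stage described in the abstract (derive layouts under perturbation, then quantify their similarity) can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' implementation: k-nearest-neighbor Jaccard overlap stands in for the paper's ten similarity metrics, and a plain PCA projection stands in for the dimensionality reductions studied.

```python
import numpy as np

def knn_sets(points, k):
    """Index set of the k nearest neighbors of each point (excluding itself)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return [set(np.argsort(row)[:k]) for row in d]

def neighborhood_stability(layout_a, layout_b, k=10):
    """Mean Jaccard overlap of k-NN sets between two 2D layouts of the same corpus."""
    na, nb = knn_sets(layout_a, k), knn_sets(layout_b, k)
    return float(np.mean([len(a & b) / len(a | b) for a, b in zip(na, nb)]))

def pca_2d(x):
    """Project to the first two principal components (stand-in for t-SNE/UMAP/MDS)."""
    xc = x - x.mean(axis=0)
    _, _, vt = np.linalg.svd(xc, full_matrices=False)
    return xc @ vt[:2].T

rng = np.random.default_rng(0)
docs = rng.normal(size=(100, 50))  # toy "document embeddings"
layout_clean = pca_2d(docs)
layout_noisy = pca_2d(docs + rng.normal(scale=0.05, size=docs.shape))  # perturbed corpus
print(neighborhood_stability(layout_clean, layout_noisy))
```

A stability score of 1.0 means the local neighborhoods are identical across the two layouts; values near 0 mean the perturbation scrambled them.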
Published in IEEE Transactions on Visualization and Computer Graphics, vol. 31, no. 1, pp. 305-315.
Citations: 0
Unmasking Dunning-Kruger Effect in Visual Reasoning & Judgment
Pub Date: 2024-09-17 DOI: 10.1109/TVCG.2024.3456326
Mengyu Chen;Yijun Liu;Emily Wall
The Dunning-Kruger Effect (DKE) is a metacognitive phenomenon where low-skilled individuals tend to overestimate their competence while high-skilled individuals tend to underestimate their competence. This effect has been observed in a number of domains including humor, grammar, and logic. In this paper, we explore if and how DKE manifests in visual reasoning and judgment tasks. Across two online user studies involving (1) a sliding puzzle game and (2) a scatterplot-based categorization task, we demonstrate that individuals are susceptible to DKE in visual reasoning and judgment tasks: those who performed best underestimated their performance, while bottom performers overestimated their performance. In addition, we contribute novel analyses that correlate susceptibility to DKE with personality traits and user interactions. Our findings pave the way for novel modes of bias detection via interaction patterns and establish promising directions towards interventions tailored to an individual's personality traits. All materials and analyses are in supplemental materials: https://github.com/CAV-Lab/DKE_supplemental.git.
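The over/underestimation pattern the study reports can be quantified with a simple calibration measure. The sketch below uses made-up scores (not the study's data) and bins participants by quartile of actual performance; it only illustrates the analysis idea, not the authors' statistical pipeline.

```python
import numpy as np

def miscalibration_by_quartile(actual, estimated):
    """Mean (estimated - actual) score gap per quartile of actual performance.
    Positive values indicate overestimation, negative values underestimation."""
    actual = np.asarray(actual, dtype=float)
    estimated = np.asarray(estimated, dtype=float)
    cuts = np.quantile(actual, [0.25, 0.5, 0.75])
    bins = np.digitize(actual, cuts)  # 0 = bottom quartile ... 3 = top quartile
    return [float(np.mean(estimated[bins == b] - actual[bins == b])) for b in range(4)]

# Hypothetical scores on a 0-100 scale showing the classic DKE pattern
actual    = [10, 20, 30, 40, 50, 60, 70, 80, 90, 95]
estimated = [45, 50, 52, 55, 58, 60, 62, 65, 70, 72]
print(miscalibration_by_quartile(actual, estimated))
```

With the toy data, the bottom quartile shows a large positive gap (overestimation) and the top quartile a negative one (underestimation), matching the effect described in the abstract.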
Published in IEEE Transactions on Visualization and Computer Graphics, vol. 31, no. 1, pp. 743-753.
Citations: 0
Understanding Visualization Authoring Techniques for Genomics Data in the Context of Personas and Tasks
Pub Date: 2024-09-17 DOI: 10.1109/TVCG.2024.3456298
Astrid van den Brandt;Sehi L'Yi;Huyen N. Nguyen;Anna Vilanova;Nils Gehlenborg
Genomics experts rely on visualization to extract and share insights from complex and large-scale datasets. Beyond off-the-shelf tools for data exploration, there is an increasing need for platforms that aid experts in authoring customized visualizations for both exploration and communication of insights. A variety of interactive techniques have been proposed for authoring data visualizations, such as template editing, shelf configuration, natural language input, and code editors. However, it remains unclear how genomics experts create visualizations and which techniques best support their visualization tasks and needs. To address this gap, we conducted two user studies with genomics researchers: (1) semi-structured interviews (n=20) to identify the tasks, user contexts, and current visualization authoring techniques and (2) an exploratory study (n=13) using visual probes to elicit users' intents and desired techniques when creating visualizations. Our contributions include (1) a characterization of how visualization authoring is currently utilized in genomics visualization, identifying limitations and benefits in light of common criteria for authoring tools, and (2) generalizable design implications for genomics visualization authoring tools based on our findings on task- and user-specific usefulness of authoring techniques. All supplemental materials are available at https://osf.io/bdj4v/
Published in IEEE Transactions on Visualization and Computer Graphics, vol. 31, no. 1, pp. 1180-1190 (open access).
Citations: 0
Gesture2Text: A Generalizable Decoder for Word-Gesture Keyboards in XR Through Trajectory Coarse Discretization and Pre-Training
Pub Date: 2024-09-16 DOI: 10.1109/TVCG.2024.3456198
Junxiao Shen;Khadija Khaldi;Enmin Zhou;Hemant Bhaskar Surale;Amy Karlson
Text entry with word-gesture keyboards (WGK) is emerging as a popular method and becoming a key interaction for Extended Reality (XR). However, the diversity of interaction modes, keyboard sizes, and visual feedback in these environments introduces divergent word-gesture trajectory data patterns, thus leading to complexity in decoding trajectories into text. Template-matching decoding methods, such as SHARK2 [32], are commonly used for these WGK systems because they are easy to implement and configure. However, these methods are susceptible to decoding inaccuracies for noisy trajectories. While conventional neural-network-based decoders (neural decoders) trained on word-gesture trajectory data have been proposed to improve accuracy, they have their own limitations: they require extensive data for training and deep-learning expertise for implementation. To address these challenges, we propose a novel solution that combines ease of implementation with high decoding accuracy: a generalizable neural decoder enabled by pre-training on large-scale coarsely discretized word-gesture trajectories. This approach produces a ready-to-use WGK decoder that is generalizable across mid-air and on-surface WGK systems in augmented reality (AR) and virtual reality (VR), as evidenced by a robust average Top-4 accuracy of 90.4% on four diverse datasets. It significantly outperforms SHARK2 with a 37.2% enhancement and surpasses the conventional neural decoder by 7.4%. Moreover, the Pre-trained Neural Decoder's size is only 4 MB after quantization, without sacrificing accuracy, and it can operate in real-time, executing in just 97 milliseconds on Quest 3.
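The "coarse discretization" step can be illustrated with a toy function. This is an assumption about the general idea (snap each normalized trajectory point to a coarse grid and keep only cell transitions), not the paper's exact scheme or parameters.

```python
def discretize_trajectory(points, grid=6):
    """Snap a normalized gesture trajectory (x, y in [0, 1]) onto a coarse
    grid x grid lattice and return the sequence of visited cell IDs,
    collapsing consecutive repeats so only cell transitions remain."""
    cells = []
    for x, y in points:
        col = min(int(x * grid), grid - 1)
        row = min(int(y * grid), grid - 1)
        cell = row * grid + col
        if not cells or cells[-1] != cell:
            cells.append(cell)
    return cells

# A rough left-to-right swipe along the top row of a keyboard
traj = [(0.05, 0.1), (0.2, 0.1), (0.4, 0.12), (0.6, 0.1), (0.9, 0.1)]
print(discretize_trajectory(traj))  # [0, 1, 2, 3, 5]
```

The payoff of such coarsening is that small per-point jitter maps to the same cell sequence, so trajectories from different devices and keyboard sizes become comparable tokens for pre-training.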
Published in IEEE Transactions on Visualization and Computer Graphics, vol. 30, no. 11, pp. 7118-7128.
Citations: 0
CataAnno: An Ancient Catalog Annotator for Annotation Cleaning by Recommendation
Pub Date: 2024-09-16 DOI: 10.1109/TVCG.2024.3456379
Hanning Shao;Xiaoru Yuan
Classical bibliography, by researching preserved catalogs from both official archives and personal collections of accumulated books, examines books throughout history, thereby revealing cultural development across historical periods. In this work, we collaborate with domain experts to accomplish the task of data annotation concerning Chinese ancient catalogs. We introduce the CataAnno system, which helps users complete annotations more efficiently through cross-linked views, recommendation methods, and convenient annotation interactions. The recommendation method can learn the background knowledge and annotation patterns that experts subconsciously integrate into the data during prior annotation processes. CataAnno searches for the most relevant previously annotated examples and recommends them to the user. Meanwhile, the cross-linked views assist users in comprehending the correlations between entries and offer explanations for these recommendations. Evaluation and expert feedback confirm that the CataAnno system, by offering high-quality recommendations and visualizing the relationships between entries, can reduce the need for specialized knowledge during the annotation process.
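The retrieval step (finding the most relevant previously annotated entries) might look like the following sketch. Everything here is hypothetical rather than the CataAnno implementation: character-bigram cosine similarity is one plausible choice for catalog entries, since classical Chinese text lacks word boundaries, and the sample entries are invented.

```python
from collections import Counter
import math

def bigrams(s):
    """Character-bigram counts; works without word segmentation."""
    return Counter(s[i:i + 2] for i in range(len(s) - 1))

def cosine(a, b):
    """Cosine similarity of two bigram Counters (missing keys count as zero)."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def recommend(entry, annotated, top_k=1):
    """Return the top_k previously annotated entries most similar to `entry`."""
    q = bigrams(entry)
    ranked = sorted(annotated, key=lambda e: cosine(q, bigrams(e)), reverse=True)
    return [(e, annotated[e]) for e in ranked[:top_k]]

# Hypothetical catalog entries with prior expert annotations
annotated = {
    "history of the song dynasty": "history",
    "tang poetry anthology": "literature",
}
print(recommend("song dynasty chronicles", annotated))
```

A new entry inherits a suggested label from its nearest annotated neighbor, which the annotator can accept or override; the cross-linked views described in the abstract would then explain why that neighbor was chosen.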
This results in enhanced accuracy and consistency in annotations, thereby enhancing the overall efficiency.
Published in IEEE Transactions on Visualization and Computer Graphics, vol. 31, no. 1, pp. 404-414.
Citations: 0
Shape It Up: An Empirically Grounded Approach for Designing Shape Palettes
Pub Date: 2024-09-16 DOI: 10.1109/TVCG.2024.3456385
Chin Tseng;Arran Zeyu Wang;Ghulam Jilani Quadri;Danielle Albers Szafir
Shape is commonly used to distinguish between categories in multi-class scatterplots. However, existing guidelines for choosing effective shape palettes rely largely on intuition and do not consider how these needs may change as the number of categories increases. Unlike color, shapes cannot be represented by a numerical space, making it difficult to propose general guidelines or design heuristics for using shape effectively. This paper presents a series of four experiments evaluating the efficiency of 39 shapes across three tasks: relative mean judgment tasks, expert preference, and correlation estimation. Our results show that conventional means for reasoning about shapes, such as filled versus unfilled, are insufficient to inform effective palette design. Further, even expert palettes vary significantly in their use of shape and corresponding effectiveness. To support effective shape palette design, we developed a model based on pairwise relations between shapes in our experiments and the number of shapes required for a given design. We embed this model in a palette design tool to give designers agency over shape selection while incorporating empirical elements of perceptual performance captured in our study. Our model advances understanding of shape perception in visualization contexts and provides practical design guidelines that can help improve categorical data encodings.
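A model "based on pairwise relations between shapes" suggests one plausible reading: greedy max-min palette construction over a pairwise discriminability matrix. The sketch below is that reading under stated assumptions; the shape names and scores are invented, not the paper's measured values.

```python
import itertools

def select_palette(shapes, discrim, n):
    """Greedily build an n-shape palette: seed with the most discriminable pair,
    then add whichever shape maximizes its worst-case pairwise discriminability
    against the shapes already chosen."""
    pair = max(itertools.combinations(shapes, 2), key=lambda p: discrim[frozenset(p)])
    palette = list(pair)
    while len(palette) < n:
        remaining = [s for s in shapes if s not in palette]
        palette.append(max(remaining,
                           key=lambda s: min(discrim[frozenset((s, p))] for p in palette)))
    return palette

# Hypothetical pairwise discriminability scores in [0, 1] (higher = easier to tell apart)
shapes = ["circle", "square", "triangle", "plus"]
discrim = {frozenset(p): v for p, v in [
    (("circle", "square"), 0.90), (("circle", "triangle"), 0.80),
    (("circle", "plus"), 0.70), (("square", "triangle"), 0.60),
    (("square", "plus"), 0.85), (("triangle", "plus"), 0.50),
]}
print(select_palette(shapes, discrim, 3))
```

The max-min criterion matches the intuition that a palette is only as good as its most confusable pair, which is why the toy run prefers "plus" over "triangle" as the third shape.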
Published in IEEE Transactions on Visualization and Computer Graphics, vol. 31, no. 1, pp. 349-359.
Citations: 0
DracoGPT: Extracting Visualization Design Preferences from Large Language Models
Pub Date: 2024-09-16 DOI: 10.1109/TVCG.2024.3456350
Huichen Will Wang;Mitchell Gordon;Leilani Battle;Jeffrey Heer
Trained on vast corpora, Large Language Models (LLMs) have the potential to encode visualization design knowledge and best practices. However, if they fail to do so, they might provide unreliable visualization recommendations. What visualization design preferences, then, have LLMs learned? We contribute DracoGPT, a method for extracting, modeling, and assessing visualization design preferences from LLMs. To assess varied tasks, we develop two pipelines—DracoGPT-Rank and DracoGPT-Recommend—to model LLMs prompted to either rank or recommend visual encoding specifications. We use Draco as a shared knowledge base in which to represent LLM design preferences and compare them to best practices from empirical research. We demonstrate that DracoGPT can accurately model the preferences expressed by LLMs, enabling analysis in terms of Draco design constraints. Across a suite of backing LLMs, we find that DracoGPT-Rank and DracoGPT-Recommend moderately agree with each other, but both substantially diverge from guidelines drawn from human subjects experiments. Future work can build on our approach to expand Draco's knowledge base to model a richer set of preferences and to provide a robust and cost-effective stand-in for LLMs.
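One way to quantify how much two sets of elicited preferences agree, as the abstract does for the Rank and Recommend pipelines, is a pairwise-agreement measure over orderings. The measure and the spec orderings below are illustrative assumptions, not DracoGPT's actual Draco-constraint-based analysis:

```python
# Illustrative sketch (not the DracoGPT implementation): fraction of spec
# pairs that two preference orderings put in the same relative order.
from itertools import combinations

def pairwise_agreement(order_a, order_b):
    """Share of item pairs ordered the same way in both rankings."""
    pos_a = {s: i for i, s in enumerate(order_a)}
    pos_b = {s: i for i, s in enumerate(order_b)}
    pairs = list(combinations(order_a, 2))
    same = sum(
        (pos_a[x] < pos_a[y]) == (pos_b[x] < pos_b[y]) for x, y in pairs
    )
    return same / len(pairs)

rank_prefs = ["bar", "line", "point", "area"]       # hypothetical ordering
recommend_prefs = ["bar", "point", "line", "area"]  # hypothetical ordering
print(pairwise_agreement(rank_prefs, recommend_prefs))
```

A score of 1.0 would mean the two elicitation styles induce identical preferences; values nearer 0.5 indicate the "moderate agreement" the paper reports.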
{"title":"DracoGPT: Extracting Visualization Design Preferences from Large Language Models","authors":"Huichen Will Wang;Mitchell Gordon;Leilani Battle;Jeffrey Heer","doi":"10.1109/TVCG.2024.3456350","DOIUrl":"10.1109/TVCG.2024.3456350","url":null,"abstract":"Trained on vast corpora, Large Language Models (LLMs) have the potential to encode visualization design knowledge and best practices. However, if they fail to do so, they might provide unreliable visualization recommendations. What visualization design preferences, then, have LLMs learned? We contribute DracoGPT, a method for extracting, modeling, and assessing visualization design preferences from LLMs. To assess varied tasks, we develop two pipelines—DracoGPT-Rank and DracoGPT-Recommend—to model LLMs prompted to either rank or recommend visual encoding specifications. We use Draco as a shared knowledge base in which to represent LLM design preferences and compare them to best practices from empirical research. We demonstrate that DracoGPT can accurately model the preferences expressed by LLMs, enabling analysis in terms of Draco design constraints. Across a suite of backing LLMs, we find that DracoGPT-Rank and DracoGPT-Recommend moderately agree with each other, but both substantially diverge from guidelines drawn from human subjects experiments. 
Future work can build on our approach to expand Draco's knowledge base to model a richer set of preferences and to provide a robust and cost-effective stand-in for LLMs.","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"31 1","pages":"710-720"},"PeriodicalIF":0.0,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142262087","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
HaptoFloater: Visuo-Haptic Augmented Reality by Embedding Imperceptible Color Vibration Signals for Tactile Display Control in a Mid-Air Image
Pub Date : 2024-09-16 DOI: 10.1109/TVCG.2024.3456175
Rina Nagano;Takahiro Kinoshita;Shingo Hattori;Yuichi Hiroi;Yuta Itoh;Takefumi Hiraki
We propose HaptoFloater, a low-latency mid-air visuo-haptic augmented reality (VHAR) system that utilizes imperceptible color vibrations. When adding tactile stimuli to the visual information of a mid-air image, the user should not perceive the latency between the tactile and visual information. However, conventional tactile presentation methods for mid-air images, based on camera-detected fingertip positioning, introduce latency due to image processing and communication. To mitigate this latency, we use a color vibration technique; humans cannot perceive the vibration when the display alternates between two different color stimuli at a frequency of 25 Hz or higher. In our system, we embed this imperceptible color vibration into the mid-air image formed by a micromirror array plate, and a photodiode on the fingertip device directly detects this color vibration to provide tactile stimulation. Thus, our system allows for the tactile perception of multiple patterns on a mid-air image in 59.5 ms. In addition, we evaluate the visual-haptic delay tolerance on a mid-air display using our VHAR system and a tactile actuator with a single pattern and faster response time. The results of our user study indicate a visual-haptic delay tolerance of 110.6 ms, which is considerably larger than the latency associated with systems using multiple tactile patterns.
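The core trick is that two colors alternating at 25 Hz or more fuse into their average for the human eye while remaining detectable to a photodiode. A minimal sketch of that encoding, with assumed RGB values and modulation depth (not the paper's implementation or calibration):

```python
# Minimal sketch of the imperceptible-color-vibration idea: alternate two
# colors whose time-average equals the intended color. At >= 25 Hz the
# flicker fuses perceptually; a photodiode can still detect the alternation.
def vibration_pair(rgb, delta):
    """Return the two alternating frame colors for a target color `rgb`."""
    hi = tuple(min(255, c + delta) for c in rgb)
    lo = tuple(max(0, c - delta) for c in rgb)
    return hi, lo

def perceived(hi, lo):
    """Time-averaged color seen by the viewer."""
    return tuple((a + b) // 2 for a, b in zip(hi, lo))

hi, lo = vibration_pair((120, 80, 200), delta=10)
print(hi, lo, perceived(hi, lo))
```

In practice the modulation depth would have to stay small enough (and the frame rate high enough) that the alternation remains below the visual flicker-fusion threshold.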
{"title":"HaptoFloater: Visuo-Haptic Augmented Reality by Embedding Imperceptible Color Vibration Signals for Tactile Display Control in a Mid-Air Image","authors":"Rina Nagano;Takahiro Kinoshita;Shingo Hattori;Yuichi Hiroi;Yuta Itoh;Takefumi Hiraki","doi":"10.1109/TVCG.2024.3456175","DOIUrl":"10.1109/TVCG.2024.3456175","url":null,"abstract":"We propose HaptoFloater, a low-latency mid-air visuo-haptic augmented reality (VHAR) system that utilizes imperceptible color vibrations. When adding tactile stimuli to the visual information of a mid-air image, the user should not perceive the latency between the tactile and visual information. However, conventional tactile presentation methods for mid-air images, based on camera-detected fingertip positioning, introduce latency due to image processing and communication. To mitigate this latency, we use a color vibration technique; humans cannot perceive the vibration when the display alternates between two different color stimuli at a frequency of 25 Hz or higher. In our system, we embed this imperceptible color vibration into the mid-air image formed by a micromirror array plate, and a photodiode on the fingertip device directly detects this color vibration to provide tactile stimulation. Thus, our system allows for the tactile perception of multiple patterns on a mid-air image in 59.5 ms. In addition, we evaluate the visual-haptic delay tolerance on a mid-air display using our VHAR system and a tactile actuator with a single pattern and faster response time. 
The results of our user study indicate a visual-haptic delay tolerance of 110.6 ms, which is considerably larger than the latency associated with systems using multiple tactile patterns.","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"30 11","pages":"7463-7472"},"PeriodicalIF":0.0,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142262081","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
PREVis: Perceived Readability Evaluation for Visualizations
Pub Date : 2024-09-16 DOI: 10.1109/TVCG.2024.3456318
Anne-Flore Cabouat;Tingying He;Petra Isenberg;Tobias Isenberg
We developed and validated an instrument to measure the perceived readability in data visualization: PREVis. Researchers and practitioners can easily use this instrument as part of their evaluations to compare the perceived readability of different visual data representations. Our instrument can complement results from controlled experiments on user task performance or provide additional data during in-depth qualitative work such as design iterations when developing a new technique. Although readability is recognized as an essential quality of data visualizations, so far there has not been a unified definition of the construct in the context of visual representations. As a result, researchers often lack guidance for determining how to ask people to rate their perceived readability of a visualization. To address this issue, we engaged in a rigorous process to develop the first validated instrument targeted at the subjective readability of visual data representations. Our final instrument consists of 11 items across 4 dimensions: understandability, layout clarity, readability of data values, and readability of data patterns. We provide the questionnaire as a document with implementation guidelines on osf.io/9cg8j. Beyond this instrument, we contribute a discussion of how researchers have previously assessed visualization readability, and an analysis of the factors underlying perceived readability in visual data representations.
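Scoring an instrument with items grouped into dimensions typically means averaging ratings within each subscale. The item names and ratings below are hypothetical placeholders (the actual 11-item questionnaire is at osf.io/9cg8j); only the 11-items-across-4-dimensions structure follows the abstract:

```python
# Hedged sketch: averaging PREVis-style responses per dimension.
# Item names and the 1-7 ratings are invented for illustration.
from statistics import mean

responses = {
    "understand_1": 6, "understand_2": 5, "understand_3": 6,
    "layout_1": 4, "layout_2": 5,
    "datavals_1": 7, "datavals_2": 6, "datavals_3": 6,
    "patterns_1": 5, "patterns_2": 4, "patterns_3": 5,
}

dimensions = {
    "understandability": ["understand_1", "understand_2", "understand_3"],
    "layout clarity": ["layout_1", "layout_2"],
    "readability of data values": ["datavals_1", "datavals_2", "datavals_3"],
    "readability of data patterns": ["patterns_1", "patterns_2", "patterns_3"],
}

scores = {dim: mean(responses[item] for item in items)
          for dim, items in dimensions.items()}
print(scores)
```

Reporting per-dimension means rather than a single total keeps the four constructs (understandability, layout clarity, data-value readability, data-pattern readability) separately interpretable.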
{"title":"PREVis: Perceived Readability Evaluation for Visualizations","authors":"Anne-Flore Cabouat;Tingying He;Petra Isenberg;Tobias Isenberg","doi":"10.1109/TVCG.2024.3456318","DOIUrl":"10.1109/TVCG.2024.3456318","url":null,"abstract":"We developed and validated an instrument to measure the perceived readability in data visualization: PREVis. Researchers and practitioners can easily use this instrument as part of their evaluations to compare the perceived readability of different visual data representations. Our instrument can complement results from controlled experiments on user task performance or provide additional data during in-depth qualitative work such as design iterations when developing a new technique. Although readability is recognized as an essential quality of data visualizations, so far there has not been a unified definition of the construct in the context of visual representations. As a result, researchers often lack guidance for determining how to ask people to rate their perceived readability of a visualization. To address this issue, we engaged in a rigorous process to develop the first validated instrument targeted at the subjective readability of visual data representations. Our final instrument consists of 11 items across 4 dimensions: understandability, layout clarity, readability of data values, and readability of data patterns. We provide the questionnaire as a document with implementation guidelines on osf.io/9cg8j. 
Beyond this instrument, we contribute a discussion of how researchers have previously assessed visualization readability, and an analysis of the factors underlying perceived readability in visual data representations.","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"31 1","pages":"1083-1093"},"PeriodicalIF":0.0,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142262126","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0