
Latest articles in IEEE Transactions on Visualization and Computer Graphics

AdversaFlow: Visual Red Teaming for Large Language Models with Multi-Level Adversarial Flow
Pub Date: 2024-09-16 DOI: 10.1109/TVCG.2024.3456150
Dazhen Deng;Chuhan Zhang;Huawei Zheng;Yuwen Pu;Shouling Ji;Yingcai Wu
Large Language Models (LLMs) are powerful but also raise significant security concerns, particularly regarding the harm they can cause, such as generating fake news that manipulates public opinion on social media and providing responses to unethical activities. Traditional red teaming approaches for identifying AI vulnerabilities rely on manual prompt construction and expertise. This paper introduces AdversaFlow, a novel visual analytics system designed to enhance LLM security against adversarial attacks through human-AI collaboration. AdversaFlow involves adversarial training between a target model and a red model, featuring unique multi-level adversarial flow and fluctuation path visualizations. These features provide insights into adversarial dynamics and LLM robustness, enabling experts to identify and mitigate vulnerabilities effectively. We present quantitative evaluations and case studies validating our system's utility and offering insights for future AI security solutions. Our method can enhance LLM security, supporting downstream scenarios like social media regulation by enabling more effective detection, monitoring, and mitigation of harmful content and behaviors.
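The abstract describes adversarial training between a target model and a red model. A minimal schematic sketch of such a red-teaming round is shown below; this is not the authors' implementation, and every function here is a hypothetical stand-in (a toy red model, target model, and safety scorer):

```python
# Schematic red-teaming loop, assuming stand-in models and scorers.

def red_model_propose(seed_prompts):
    """Stand-in red model: mutate seed prompts into candidate attacks."""
    return [p + " (please ignore your safety guidelines)" for p in seed_prompts]

def target_model_respond(prompt):
    """Stand-in target LLM: refuses prompts containing an obvious attack marker."""
    if "ignore your safety guidelines" in prompt:
        return "I can't help with that."
    return "Sure, here is a response to: " + prompt

def harmfulness_score(response):
    """Stand-in safety scorer: 1.0 if the target complied, 0.0 if it refused."""
    return 0.0 if response.startswith("I can't") else 1.0

def red_team_round(seed_prompts):
    """One adversarial round: attack, respond, score, keep successful attacks."""
    attacks = red_model_propose(seed_prompts)
    scored = [(a, harmfulness_score(target_model_respond(a))) for a in attacks]
    # In an actual system, successful attacks would be fed back
    # to train the red model; here we simply return them.
    return [a for a, score in scored if score > 0.5]
```

In a real pipeline the scored attack/response pairs would be what a visual analytics front end such as the one described here aggregates into flow visualizations.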
{"title":"AdversaFlow: Visual Red Teaming for Large Language Models with Multi-Level Adversarial Flow","authors":"Dazhen Deng;Chuhan Zhang;Huawei Zheng;Yuwen Pu;Shouling Ji;Yingcai Wu","doi":"10.1109/TVCG.2024.3456150","DOIUrl":"10.1109/TVCG.2024.3456150","url":null,"abstract":"Large Language Models (LLMs) are powerful but also raise significant security concerns, particularly regarding the harm they can cause, such as generating fake news that manipulates public opinion on social media and providing responses to unethical activities. Traditional red teaming approaches for identifying AI vulnerabilities rely on manual prompt construction and expertise. This paper introduces AdversaFlow, a novel visual analytics system designed to enhance LLM security against adversarial attacks through human-AI collaboration. AdversaFlow involves adversarial training between a target model and a red model, featuring unique multi-level adversarial flow and fluctuation path visualizations. These features provide insights into adversarial dynamics and LLM robustness, enabling experts to identify and mitigate vulnerabilities effectively. We present quantitative evaluations and case studies validating our system's utility and offering insights for future AI security solutions. 
Our method can enhance LLM security, supporting downstream scenarios like social media regulation by enabling more effective detection, monitoring, and mitigation of harmful content and behaviors.","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"31 1","pages":"492-502"},"PeriodicalIF":0.0,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142262086","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Does This Have a Particular Meaning? Interactive Pattern Explanation for Network Visualizations
Pub Date: 2024-09-16 DOI: 10.1109/TVCG.2024.3456192
Xinhuan Shu;Alexis Pister;Junxiu Tang;Fanny Chevalier;Benjamin Bach
This paper presents an interactive technique to explain visual patterns in network visualizations to analysts who do not understand these visualizations and who are learning to read them. Learning a visualization requires mastering its visual grammar and decoding information presented through visual marks, graphical encodings, and spatial configurations. To help people learn network visualization designs and extract meaningful information, we introduce the concept of interactive pattern explanation that allows viewers to select an arbitrary area in a visualization, then automatically mines the underlying data patterns, and explains both visual and data patterns present in the viewer's selection. In a qualitative and a quantitative user study with a total of 32 participants, we compare interactive pattern explanations to textual-only and visual-only (cheatsheets) explanations. Our results show that interactive explanations increase learning of i) unfamiliar visualizations, ii) patterns in network science, and iii) the respective network terminology.
{"title":"Does This Have a Particular Meaning? Interactive Pattern Explanation for Network Visualizations","authors":"Xinhuan Shu;Alexis Pister;Junxiu Tang;Fanny Chevalier;Benjamin Bach","doi":"10.1109/TVCG.2024.3456192","DOIUrl":"10.1109/TVCG.2024.3456192","url":null,"abstract":"This paper presents an interactive technique to explain visual patterns in network visualizations to analysts who do not understand these visualizations and who are learning to read them. Learning a visualization requires mastering its visual grammar and decoding information presented through visual marks, graphical encodings, and spatial configurations. To help people learn network visualization designs and extract meaningful information, we introduce the concept of interactive pattern explanation that allows viewers to select an arbitrary area in a visualization, then automatically mines the underlying data patterns, and explains both visual and data patterns present in the viewer's selection. In a qualitative and a quantitative user study with a total of 32 participants, we compare interactive pattern explanations to textual-only and visual-only (cheatsheets) explanations. Our results show that interactive explanations increase learning of i) unfamiliar visualizations, ii) patterns in network science, and iii) the respective network terminology.","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"31 1","pages":"677-687"},"PeriodicalIF":0.0,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142262082","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
How Aligned are Human Chart Takeaways and LLM Predictions? A Case Study on Bar Charts with Varying Layouts
Pub Date: 2024-09-16 DOI: 10.1109/TVCG.2024.3456378
Huichen Will Wang;Jane Hoffswell;Sao Myat Thazin Thane;Victor S. Bursztyn;Cindy Xiong Bearfield
Large Language Models (LLMs) have been adopted for a variety of visualization tasks, but how far are we from perceptually aware LLMs that can predict human takeaways? Graphical perception literature has shown that human chart takeaways are sensitive to visualization design choices, such as spatial layouts. In this work, we examine the extent to which LLMs exhibit such sensitivity when generating takeaways, using bar charts with varying spatial layouts as a case study. We conducted three experiments and tested four common bar chart layouts: vertically juxtaposed, horizontally juxtaposed, overlaid, and stacked. In Experiment 1, we identified the optimal configurations to generate meaningful chart takeaways by testing four LLMs, two temperature settings, nine chart specifications, and two prompting strategies. We found that even state-of-the-art LLMs struggled to generate semantically diverse and factually accurate takeaways. In Experiment 2, we used the optimal configurations to generate 30 chart takeaways each for eight visualizations across four layouts and two datasets in both zero-shot and one-shot settings. Compared to human takeaways, we found that the takeaways LLMs generated often did not match the types of comparisons made by humans. In Experiment 3, we examined the effect of chart context and data on LLM takeaways. We found that LLMs, unlike humans, exhibited variation in takeaway comparison types for different bar charts using the same bar layout. Overall, our case study evaluates the ability of LLMs to emulate human interpretations of data and points to challenges and opportunities in using LLMs to predict human chart takeaways.
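Experiment 1 sweeps a grid of configurations (four LLMs, two temperature settings, nine chart specifications, two prompting strategies). A minimal sketch of enumerating such a grid is below; the specific names are placeholders, not the configurations used in the paper:

```python
# Enumerate a hypothetical Experiment-1-style configuration grid.
from itertools import product

llms = ["model-a", "model-b", "model-c", "model-d"]  # placeholder model names
temperatures = [0.0, 0.7]                            # placeholder settings
chart_specs = [f"spec-{i}" for i in range(1, 10)]    # nine chart specifications
prompt_strategies = ["zero-shot", "one-shot"]

# Cartesian product: every (llm, temperature, spec, strategy) combination.
configs = list(product(llms, temperatures, chart_specs, prompt_strategies))
# 4 * 2 * 9 * 2 = 144 candidate configurations to compare.
```

Each configuration would then be used to sample takeaways and score them for semantic diversity and factual accuracy before picking the optimal one.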
{"title":"How Aligned are Human Chart Takeaways and LLM Predictions? A Case Study on Bar Charts with Varying Layouts","authors":"Huichen Will Wang;Jane Hoffswell;Sao Myat Thazin Thane;Victor S. Bursztyn;Cindy Xiong Bearfield","doi":"10.1109/TVCG.2024.3456378","DOIUrl":"10.1109/TVCG.2024.3456378","url":null,"abstract":"Large Language Models (LLMs) have been adopted for a variety of visualizations tasks, but how far are we from perceptually aware LLMs that can predict human takeaways? Graphical perception literature has shown that human chart takeaways are sensitive to visualization design choices, such as spatial layouts. In this work, we examine the extent to which LLMs exhibit such sensitivity when generating takeaways, using bar charts with varying spatial layouts as a case study. We conducted three experiments and tested four common bar chart layouts: vertically juxtaposed, horizontally juxtaposed, overlaid, and stacked. In Experiment 1, we identified the optimal configurations to generate meaningful chart takeaways by testing four LLMs, two temperature settings, nine chart specifications, and two prompting strategies. We found that even state-of-the-art LLMs struggled to generate semantically diverse and factually accurate takeaways. In Experiment 2, we used the optimal configurations to generate 30 chart takeaways each for eight visualizations across four layouts and two datasets in both zero-shot and one-shot settings. Compared to human takeaways, we found that the takeaways LLMs generated often did not match the types of comparisons made by humans. In Experiment 3, we examined the effect of chart context and data on LLM takeaways. We found that LLMs, unlike humans, exhibited variation in takeaway comparison types for different bar charts using the same bar layout. 
Overall, our case study evaluates the ability of LLMs to emulate human interpretations of data and points to challenges and opportunities in using LLMs to predict human chart takeaways.","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"31 1","pages":"536-546"},"PeriodicalIF":0.0,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142262083","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Discursive Patinas: Anchoring Discussions in Data Visualizations
Pub Date: 2024-09-13 DOI: 10.1109/TVCG.2024.3456334
Tobias Kauer;Derya Akbaba;Marian Dörk;Benjamin Bach
This paper presents discursive patinas, a technique to visualize discussions onto data visualizations, inspired by how people leave traces in the physical world. While data visualizations are widely discussed in online communities and social media, comments tend to be displayed separately from the visualization and we lack ways to relate these discussions back to the content of the visualization, e.g., to situate comments, explain visual patterns, or question assumptions. In our visualization annotation interface, users can designate areas within the visualization. Discursive patinas are made of overlaid visual marks (anchors), attached to textual comments with category labels, likes, and replies. By coloring and styling the anchors, a meta visualization emerges, showing what and where people comment and annotate the visualization. These patinas show regions of heavy discussions, recent commenting activity, and the distribution of questions, suggestions, or personal stories. We ran workshops with 90 students, domain experts, and visualization researchers to study how people use anchors to discuss visualizations and how patinas influence people's understanding of the discussion. Our results show that discursive patinas improve the ability to navigate discussions and guide people to comments that help understand, contextualize, or scrutinize the visualization. We discuss the potential of anchors and patinas to support discursive engagements, including critical readings of visualizations, design feedback, and feminist approaches to data visualization.
{"title":"Discursive Patinas: Anchoring Discussions in Data Visualizations","authors":"Tobias Kauer;Derya Akbaba;Marian Dörk;Benjamin Bach","doi":"10.1109/TVCG.2024.3456334","DOIUrl":"10.1109/TVCG.2024.3456334","url":null,"abstract":"This paper presents discursive patinas, a technique to visualize discussions onto data visualizations, inspired by how people leave traces in the physical world. While data visualizations are widely discussed in online communities and social media, comments tend to be displayed separately from the visualization and we lack ways to relate these discussions back to the content of the visualization, e.g., to situate comments, explain visual patterns, or question assumptions. In our visualization annotation interface, users can designate areas within the visualization. Discursive patinas are made of overlaid visual marks (anchors), attached to textual comments with category labels, likes, and replies. By coloring and styling the anchors, a meta visualization emerges, showing what and where people comment and annotate the visualization. These patinas show regions of heavy discussions, recent commenting activity, and the distribution of questions, suggestions, or personal stories. We ran workshops with 90 students, domain experts, and visualization researchers to study how people use anchors to discuss visualizations and how patinas influence people's understanding of the discussion. Our results show that discursive patinas improve the ability to navigate discussions and guide people to comments that help understand, contextualize, or scrutinize the visualization. 
We discuss the potential of anchors and patinas to support discursive engagements, including critical readings of visualizations, design feedback, and feminist approaches to data visualization.","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"31 1","pages":"1246-1256"},"PeriodicalIF":0.0,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142262124","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
DeLVE into Earth's Past: A Visualization-Based Exhibit Deployed Across Multiple Museum Contexts
Pub Date: 2024-09-13 DOI: 10.1109/TVCG.2024.3456174
Mara Solen;Nigar Sultana;Laura Lukes;Tamara Munzner
While previous work has found success in deploying visualizations as museum exhibits, it has not investigated whether museum context impacts visitor behaviour with these exhibits. We present an interactive Deep-time Literacy Visualization Exhibit (DeLVE) to help museum visitors understand deep time (lengths of extremely long geological processes) by improving proportional reasoning skills through comparison of different time periods. DeLVE uses a new visualization idiom, Connected Multi-Tier Ranges, to visualize curated datasets of past events across multiple scales of time, relating extreme scales with concrete scales that have more familiar magnitudes and units. Museum staff at three separate museums approved the deployment of DeLVE as a digital kiosk, and devoted time to curating a unique dataset in each of them. We collect data from two sources, an observational study and system trace logs. We discuss the importance of context: similar museum exhibits in different contexts were received very differently by visitors. We additionally discuss differences in our process from Sedlmair et al.'s design study methodology which is focused on design studies triggered by connection with collaborators rather than the discovery of a concept to communicate. Supplemental materials are available at: https://osf.io/z53dq/
{"title":"DeLVE into Earth's Past: A Visualization-Based Exhibit Deployed Across Multiple Museum Contexts","authors":"Mara Solen;Nigar Sultana;Laura Lukes;Tamara Munzner","doi":"10.1109/TVCG.2024.3456174","DOIUrl":"10.1109/TVCG.2024.3456174","url":null,"abstract":"While previous work has found success in deploying visualizations as museum exhibits, it has not investigated whether museum context impacts visitor behaviour with these exhibits. We present an interactive Deep-time Literacy Visualization Exhibit (DeLVE) to help museum visitors understand deep time (lengths of extremely long geological processes) by improving proportional reasoning skills through comparison of different time periods. DeLVE uses a new visualization idiom, Connected Multi-Tier Ranges, to visualize curated datasets of past events across multiple scales of time, relating extreme scales with concrete scales that have more familiar magnitudes and units. Museum staff at three separate museums approved the deployment of DeLVE as a digital kiosk, and devoted time to curating a unique dataset in each of them. We collect data from two sources, an observational study and system trace logs. We discuss the importance of context: similar museum exhibits in different contexts were received very differently by visitors. We additionally discuss differences in our process from Sedlmair et al.'s design study methodology which is focused on design studies triggered by connection with collaborators rather than the discovery of a concept to communicate. 
Supplemental materials are available at: https://osf.io/z53dq/","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"31 1","pages":"952-961"},"PeriodicalIF":0.0,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142262123","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
SpreadLine: Visualizing Egocentric Dynamic Influence
Pub Date: 2024-09-13 DOI: 10.1109/TVCG.2024.3456373
Yun-Hsin Kuo;Dongyu Liu;Kwan-Liu Ma
Egocentric networks, often visualized as node-link diagrams, portray the complex relationship (link) dynamics between an entity (node) and others. However, common analytics tasks are multifaceted, encompassing interactions among four key aspects: strength, function, structure, and content. Current node-link visualization designs may fall short, focusing narrowly on certain aspects and neglecting the holistic, dynamic nature of egocentric networks. To bridge this gap, we introduce SpreadLine, a novel visualization framework designed to enable the visual exploration of egocentric networks from these four aspects at the microscopic level. Leveraging the intuitive appeal of storyline visualizations, SpreadLine adopts a storyline-based design to represent entities and their evolving relationships. We further encode essential topological information in the layout and condense the contextual information in a metro map metaphor, allowing for a more engaging and effective way to explore temporal and attribute-based information. To guide our work, with a thorough review of pertinent literature, we have distilled a task taxonomy that addresses the analytical needs specific to egocentric network exploration. Acknowledging the diverse analytical requirements of users, SpreadLine offers customizable encodings to enable users to tailor the framework for their tasks. We demonstrate the efficacy and general applicability of SpreadLine through three diverse real-world case studies (disease surveillance, social media trends, and academic career evolution) and a usability study.
{"title":"SpreadLine: Visualizing Egocentric Dynamic Influence","authors":"Yun-Hsin Kuo;Dongyu Liu;Kwan-Liu Ma","doi":"10.1109/TVCG.2024.3456373","DOIUrl":"10.1109/TVCG.2024.3456373","url":null,"abstract":"Egocentric networks, often visualized as node-link diagrams, portray the complex relationship (link) dynamics between an entity (node) and others. However, common analytics tasks are multifaceted, encompassing interactions among four key aspects: strength, function, structure, and content. Current node-link visualization designs may fall short, focusing narrowly on certain aspects and neglecting the holistic, dynamic nature of egocentric networks. To bridge this gap, we introduce SpreadLine, a novel visualization framework designed to enable the visual exploration of egocentric networks from these four aspects at the microscopic level. Leveraging the intuitive appeal of storyline visualizations, SpreadLine adopts a storyline-based design to represent entities and their evolving relationships. We further encode essential topological information in the layout and condense the contextual information in a metro map metaphor, allowing for a more engaging and effective way to explore temporal and attribute-based information. To guide our work, with a thorough review of pertinent literature, we have distilled a task taxonomy that addresses the analytical needs specific to egocentric network exploration. Acknowledging the diverse analytical requirements of users, SpreadLine offers customizable encodings to enable users to tailor the framework for their tasks. 
We demonstrate the efficacy and general applicability of SpreadLine through three diverse real-world case studies (disease surveillance, social media trends, and academic career evolution) and a usability study.","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"31 1","pages":"1050-1060"},"PeriodicalIF":0.0,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142262125","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
MobiTangibles: Enabling Physical Manipulation Experiences of Virtual Precision Hand-Held Tools' Miniature Control in VR
Pub Date: 2024-09-13 DOI: 10.1109/TVCG.2024.3456191
Abhijeet Mishra;Harshvardhan Singh;Aman Parnami;Jainendra Shukla
Realistic simulation of miniature control interactions, typically characterized by the precise and confined motions found in precision hand-held tools such as calipers, powered engravers, and retractable knives, is beneficial for skill training with these kinds of tools in virtual reality (VR) environments. However, existing approaches aiming to simulate hand-held tools' miniature control manipulation experiences in VR entail prototyping complexity and require expertise, posing challenges for novice users and individuals with limited resources. Addressing this challenge, we introduce MobiTangibles—proxies for precision hand-held tools' miniature control interactions utilizing smartphone-based magnetic field sensing. MobiTangibles passively replicate fundamental miniature control experiences associated with hand-held tools, such as single-axis translation and rotation, enabling quick and easy use for diverse VR scenarios without requiring extensive technical knowledge. We conducted a comprehensive technical evaluation to validate the functionality of MobiTangibles across diverse settings, including evaluations for electromagnetic interference within indoor environments. In a user-centric evaluation involving 15 participants across bare hands, VR controllers, and MobiTangibles conditions, we further assessed the quality of miniaturized manipulation experiences in VR. Our findings indicate that MobiTangibles outperformed conventional methods in realism and fatigue, receiving positive feedback.
{"title":"MobiTangibles: Enabling Physical Manipulation Experiences of Virtual Precision Hand-Held Tools' Miniature Control in VR","authors":"Abhijeet Mishra;Harshvardhan Singh;Aman Parnami;Jainendra Shukla","doi":"10.1109/TVCG.2024.3456191","DOIUrl":"10.1109/TVCG.2024.3456191","url":null,"abstract":"Realistic simulation for miniature control interactions, typically identified by precise and confined motions, commonly found in precision hand-held tools, like calipers, powered engravers, retractable knives, etc., are beneficial for skill training associated with these kinds of tools in virtual reality (VR) environments. However, existing approaches aiming to simulate hand-held tools' miniature control manipulation experiences in VR entail prototyping complexity and require expertise, posing challenges for novice users and individuals with limited resources. Addressing this challenge, we introduce MobiTangibles—proxies for precision hand-held tools' miniature control interactions utilizing smartphone-based magnetic field sensing. MobiTangibles passively replicate fundamental miniature control experiences associated with hand-held tools, such as single-axis translation and rotation, enabling quick and easy use for diverse VR scenarios without requiring extensive technical knowledge. We conducted a comprehensive technical evaluation to validate the functionality of MobiTangibles across diverse settings, including evaluations for electromagnetic interference within indoor environments. In a user-centric evaluation involving 15 participants across bare hands, VR controllers, and MobiTangibles conditions, we further assessed the quality of miniaturized manipulation experiences in VR. 
Our findings indicate that MobiTangibles outperformed conventional methods in realism and fatigue, receiving positive feedback.","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"30 11","pages":"7321-7331"},"PeriodicalIF":0.0,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142262129","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
From Avatars to Agents: Self-Related Cues Through Embodiment and Personalization Affect Body Perception in Virtual Reality
Pub Date: 2024-09-13 DOI: 10.1109/TVCG.2024.3456211
Marie Luisa Fielder;Erik Wolf;Nina Döllinger;David Mal;Mario Botsch;Marc Erich Latoschik;Carolin Wienrich
Our work investigates the influence of self-related cues in the design of virtual humans on body perception in virtual reality. In a $2 \times 2$ mixed design, 64 participants faced photorealistic virtual humans either as a motion-synchronized embodied avatar or as an autonomous moving agent, appearing subsequently with a personalized and generic texture. Our results unveil that self-related cues through embodiment and personalization yield an individual and complemented increase in participants' sense of embodiment and self-identification towards the virtual human. Different body weight modification and estimation tasks further showed an impact of both factors on participants' body weight perception. Additional analyses revealed that the participant's body mass index predicted body weight estimations in all conditions and that participants' self-esteem and body shape concerns correlated with different body weight perception results. Hence, we have demonstrated the occurrence of double standards through induced self-related cues in virtual human perception, especially through embodiment.
IEEE transactions on visualization and computer graphics, vol. 30, no. 11, pp. 7386-7396 (2024).
User Experience of Visualizations in Motion: A Case Study and Design Considerations
Pub Date : 2024-09-13 DOI: 10.1109/TVCG.2024.3456319
Lijie Yao;Federica Bucchieri;Victoria McArthur;Anastasia Bezerianos;Petra Isenberg
We present a systematic review, an empirical study, and a first set of considerations for designing visualizations in motion, derived from a concrete scenario in which these visualizations were used to support a primary task. In practice, when viewers are confronted with embedded visualizations, they often have to focus on a primary task and can only quickly glance at a visualization showing rich, often dynamically updated, information. As such, the visualizations must be designed so as not to distract from the primary task, while at the same time being readable and useful for aiding the primary task. For example, in games, players who are engaged in a battle have to look at their enemies but also read the remaining health of their own game character from the health bar over their character's head. Many trade-offs are possible in the design of embedded visualizations in such dynamic scenarios, which we explore in depth in this paper with a focus on user experience. We use video games as an example of an application context with a rich existing set of visualizations in motion. We begin our work with a systematic review of in-game visualizations in motion. Next, we conduct an empirical user study to investigate how different embedded visualizations in motion designs impact user experience. We conclude with a set of considerations and trade-offs for designing visualizations in motion more broadly as derived from what we learned about video games. All supplemental materials of this paper are available at osf.io/3v8wm/.
IEEE transactions on visualization and computer graphics, vol. 31, no. 1, pp. 174-184.
How Good (Or Bad) Are LLMs at Detecting Misleading Visualizations?
Pub Date : 2024-09-12 DOI: 10.1109/TVCG.2024.3456333
Leo Yu-Ho Lo;Huamin Qu
In this study, we address the growing issue of misleading charts, a prevalent problem that undermines the integrity of information dissemination. Misleading charts can distort the viewer's perception of data, leading to misinterpretations and decisions based on false information. The development of effective automatic detection methods for misleading charts is an urgent field of research. The recent advancement of multimodal Large Language Models (LLMs) has introduced a promising direction for addressing this challenge. We explored the capabilities of these models in analyzing complex charts and assessing the impact of different prompting strategies on the models' analyses. We utilized a dataset of misleading charts collected from the internet by prior research and crafted nine distinct prompts, ranging from simple to complex, to test the ability of four different multimodal LLMs to detect over 21 different chart issues. Through three experiments, from initial exploration to detailed analysis, we progressively gained insights into how to effectively prompt LLMs to identify misleading charts and developed strategies to address the scalability challenges encountered as we expanded our detection range from the initial five issues to 21 issues in the final experiment. Our findings reveal that multimodal LLMs possess a strong capability for chart comprehension and critical thinking in data interpretation. There is significant potential in employing multimodal LLMs to counter misleading information by supporting critical thinking and enhancing visualization literacy. This study demonstrates the applicability of LLMs in addressing the pressing concern of misleading charts.
IEEE transactions on visualization and computer graphics, vol. 31, no. 1, pp. 1116-1125.
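The prompting workflow this abstract describes, pairing a chart image with instructions that enumerate candidate issues for a multimodal LLM to check, can be sketched as follows. This is an illustrative assembly of a chat-style request payload only; the issue list, model name, and message schema are assumptions for demonstration, not details taken from the paper, and no actual API call is made.

```python
import base64
import json

# A small, illustrative subset of chart issues to probe for
# (the paper tests over 21 issues; these three are examples only).
CHART_ISSUES = [
    "truncated y-axis",
    "inconsistent binning",
    "dual axes with mismatched scales",
]

def build_detection_prompt(issues):
    """Compose a detection instruction enumerating the candidate issues."""
    numbered = "\n".join(f"{i + 1}. {issue}" for i, issue in enumerate(issues))
    return (
        "You are auditing a chart for misleading design. "
        "Check the image for each issue below and answer yes/no with a reason:\n"
        + numbered
    )

def build_request(image_bytes, issues, model="example-multimodal-model"):
    """Assemble a chat-style request pairing the prompt text with the chart image."""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": build_detection_prompt(issues)},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{encoded}"}},
            ],
        }],
    }

# Assemble (but do not send) a request for a placeholder image.
request = build_request(b"\x89PNG placeholder", CHART_ISSUES)
print(json.dumps(request)[:60])
```

Varying `build_detection_prompt` — from a bare "is this chart misleading?" to a fully enumerated checklist — is one way to realize the simple-to-complex prompt ladder the study evaluates.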