Latest Publications in IEEE Transactions on Visualization and Computer Graphics

Iguanodon: A Code-Breaking Game for Improving Visualization Construction Literacy.
Pub Date: 2024-09-27 DOI: 10.1109/TVCG.2024.3468948
Patrick Adelberger, Oleg Lesota, Klaus Eckelt, Markus Schedl, Marc Streit

In today's data-rich environment, visualization literacy, the ability to understand and communicate information through charts, is increasingly important. However, constructing effective charts can be challenging due to the numerous design choices involved. Off-the-shelf systems and libraries produce charts with carefully selected defaults that users may not be aware of, making it hard to increase their visualization literacy with those systems. In addition, traditional ways of improving visualization literacy, such as textbooks and tutorials, can be burdensome as they require sifting through a plethora of resources. To address this challenge, we designed Iguanodon, an easy-to-use game application that complements the traditional methods of improving visualization construction literacy. In our game application, users interactively choose whether to apply design choices, which we assign to sub-tasks that must be optimized to create an effective chart. The application offers multiple game variations to help users learn how different design choices should be applied to construct effective charts. Furthermore, our approach easily adapts to different visualization design guidelines. We describe the application's design and present the results of a user study with 37 participants. Our findings indicate that our game-based approach supports users in improving their visualization literacy.
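
The round-based mechanic described above (players deciding, choice by choice, whether a design decision helps the chart) can be sketched in a few lines. Everything here is hypothetical: the guideline entries, sub-task names, and scoring are invented for illustration and are not taken from the Iguanodon application.

```python
# Purely illustrative sketch of one round: a guideline maps design choices
# to the sub-task each one optimizes, and the round scores the choices the
# player decided to apply. All names are invented.

GUIDELINE = {
    "sort_bars_by_value":   "ordering",
    "start_y_axis_at_zero": "scale",
    "remove_chartjunk":     "clutter",
    "label_axes":           "annotation",
}

def score(applied):
    """Count guideline-relevant choices applied and list open sub-tasks."""
    hit = {c for c in applied if c in GUIDELINE}
    missing = [task for choice, task in GUIDELINE.items() if choice not in hit]
    return len(hit), missing

# The player applied two helpful choices and one decoy with no effect.
applied = {"sort_bars_by_value", "label_axes", "rainbow_colors"}
points, todo = score(applied)
print(f"{points}/{len(GUIDELINE)} sub-tasks optimized; still open: {todo}")
```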

Citations: 0
Field of View Restriction and Snap Turning as Cybersickness Mitigation Tools.
Pub Date: 2024-09-27 DOI: 10.1109/TVCG.2024.3470214
Jonathan W Kelly, Taylor A Doty, Stephen B Gilbert, Michael C Dorneich

Multiple tools are available to reduce cybersickness (sickness caused by virtual reality), but past research has not investigated the combined effects of multiple mitigation tools. Field of view (FOV) restriction limits peripheral vision during self-motion, and ample evidence supports its effectiveness for reducing cybersickness. Snap turning involves discrete rotations of the user's perspective without presenting intermediate views, although reports on its effectiveness at reducing cybersickness are limited and equivocal. Both mitigation tools reduce the visual motion that can cause cybersickness. The current study (N = 201) investigated the individual and combined effects of FOV restriction and snap turning on cybersickness when playing a consumer virtual reality game. FOV restriction and snap turning in isolation reduced cybersickness compared to a control condition without mitigation tools. Yet, the combination of FOV restriction and snap turning did not further reduce cybersickness beyond the individual tools in isolation, and in some cases the combination of tools led to cybersickness similar to that in the no mitigation control. These results indicate that caution is warranted when combining multiple cybersickness mitigation tools, which can interact in unexpected ways.
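
A minimal sketch of the two mechanics, for readers who have not seen them implemented: snap turning jumps the heading by a fixed step with no intermediate frames, and FOV restriction narrows the visible field as self-motion speed grows. The angle, FOV values, and function names below are illustrative assumptions, not code from the study.

```python
import math

SNAP_ANGLE = math.radians(30)            # one discrete turn step (assumed)
FULL_FOV, RESTRICTED_FOV = 110.0, 70.0   # degrees, illustrative values

def snap_turn(yaw, direction):
    """Rotate the view in one discrete jump; no intermediate views shown."""
    return yaw + direction * SNAP_ANGLE

def restricted_fov(speed, max_speed):
    """Narrow the field of view linearly with self-motion speed."""
    t = min(abs(speed) / max_speed, 1.0)
    return FULL_FOV - t * (FULL_FOV - RESTRICTED_FOV)

yaw = snap_turn(0.0, +1)                 # one press of the right-turn control
print(math.degrees(yaw))                 # 30.0: view jumps to the new heading
print(restricted_fov(2.0, 4.0))          # 90.0: halfway toward restricted FOV
```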

Citations: 0
A Simulation-based Approach for Quantifying the Impact of Interactive Label Correction for Machine Learning.
Pub Date: 2024-09-26 DOI: 10.1109/TVCG.2024.3468352
Yixuan Wang, Jieqiong Zhao, Jiayi Hong, Ronald G Askin, Ross Maciejewski

Recent years have witnessed growing interest in understanding the sensitivity of machine learning to training data characteristics. While researchers have claimed the benefits of activities such as a human-in-the-loop approach of interactive label correction for improving model performance, there have been limited studies to quantitatively probe the relationship between the cost of label correction and the associated benefit in model performance. We employ a simulation-based approach to explore the efficacy of label correction under diverse task conditions, namely different datasets, noise properties, and machine learning algorithms. We measure the impact of label correction on model performance under the best-case scenario assumption: perfect correction (perfect human and visual systems), serving as an upper-bound estimation of the benefits derived from visual interactive label correction. The simulation results reveal a trade-off between the label correction effort expended and model performance improvement. Notably, task conditions play a crucial role in shaping the trade-off. Based on the simulation results, we develop a set of recommendations to help practitioners determine conditions under which interactive label correction is an effective mechanism for improving model performance.
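
The best-case simulation loop is easy to reproduce in miniature: inject label noise, perfectly correct a growing fraction of the noisy labels, retrain, and track test accuracy against the correction budget. The dataset, model, and noise rate below are placeholders chosen for brevity, not the paper's experimental conditions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Inject symmetric label noise into the training labels.
noise_rate = 0.3
flip = rng.random(len(y_tr)) < noise_rate
y_noisy = np.where(flip, 1 - y_tr, y_tr)

# Sweep the correction budget: restore ground truth for a fraction of the
# noisy labels, mimicking the "perfect correction" upper-bound assumption.
noisy_idx = np.flatnonzero(flip)
for budget in [0.0, 0.25, 0.5, 0.75, 1.0]:
    y_fixed = y_noisy.copy()
    chosen = rng.choice(noisy_idx, size=int(budget * len(noisy_idx)), replace=False)
    y_fixed[chosen] = y_tr[chosen]
    acc = LogisticRegression(max_iter=1000).fit(X_tr, y_fixed).score(X_te, y_te)
    print(f"corrected {budget:.0%} of noisy labels -> test accuracy {acc:.3f}")
```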

Citations: 0
A Comprehensive Evaluation of Arbitrary Image Style Transfer Methods.
Pub Date: 2024-09-25 DOI: 10.1109/TVCG.2024.3466964
Zijun Zhou, Fan Tang, Yuxin Zhang, Oliver Deussen, Juan Cao, Weiming Dong, Xiangtao Li, Tong-Yee Lee

Despite the remarkable progress in the field of arbitrary image style transfer (AST), inconsistent evaluation continues to plague style transfer research. Existing methods often suffer from limited objective evaluation and inconsistent subjective feedback, hindering reliable comparisons among AST variants. In this study, we propose a multi-granularity assessment system that combines standardized objective and subjective evaluations. We collect a fine-grained dataset considering a range of image contexts such as different scenes, object complexities, and rich parsing information from multiple sources. Objective and subjective studies are conducted using the collected dataset. Specifically, we innovate on traditional subjective studies by developing an online evaluation system utilizing a combination of point-wise, pair-wise, and group-wise questionnaires. Finally, we bridge the gap between objective and subjective evaluations by examining the consistency between the results from the two studies. We experimentally evaluate CNN-based, flow-based, transformer-based, and diffusion-based AST methods by the proposed multi-granularity assessment system, which lays the foundation for a reliable and robust evaluation. Providing standardized measures, objective data, and detailed subjective feedback empowers researchers to make informed comparisons and drive innovation in this rapidly evolving field. Finally, for the collected dataset and our online evaluation system, please see http://ivc.ia.ac.cn.
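
The consistency check between objective and subjective results can be illustrated with a rank correlation over per-method scores; the numbers below are entirely made up, and the paper's actual metrics and study design are richer than this sketch.

```python
from scipy.stats import spearmanr

# Hypothetical per-method scores: one objective metric and one mean
# subjective rating for four families of AST methods.
objective  = {"cnn": 0.62, "flow": 0.58, "transformer": 0.71, "diffusion": 0.66}
subjective = {"cnn": 3.1,  "flow": 2.8,  "transformer": 4.2,  "diffusion": 3.9}

methods = sorted(objective)
rho, p = spearmanr([objective[m] for m in methods],
                   [subjective[m] for m in methods])
print(f"Spearman rho = {rho:.2f} (p = {p:.2f})")  # rho near 1: consistent rankings
```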

Citations: 0
PhenoFlow: A Human-LLM Driven Visual Analytics System for Exploring Large and Complex Stroke Datasets.
Pub Date: 2024-09-24 DOI: 10.1109/TVCG.2024.3456215
Jaeyoung Kim, Sihyeon Lee, Hyeon Jeon, Keon-Joo Lee, Hee-Joon Bae, Bohyoung Kim, Jinwook Seo

Acute stroke demands prompt diagnosis and treatment to achieve optimal patient outcomes. However, the intricate and irregular nature of clinical data associated with acute stroke, particularly blood pressure (BP) measurements, presents substantial obstacles to effective visual analytics and decision-making. Through a year-long collaboration with experienced neurologists, we developed PhenoFlow, a visual analytics system that leverages the collaboration between human and Large Language Models (LLMs) to analyze the extensive and complex data of acute ischemic stroke patients. PhenoFlow pioneers an innovative workflow, where the LLM serves as a data wrangler while neurologists explore and supervise the output using visualizations and natural language interactions. This approach enables neurologists to focus more on decision-making with reduced cognitive load. To protect sensitive patient information, PhenoFlow only utilizes metadata to make inferences and synthesize executable codes, without accessing raw patient data. This ensures that the results are both reproducible and interpretable while maintaining patient privacy. The system incorporates a slice-and-wrap design that employs temporal folding to create an overlaid circular visualization. Combined with a linear bar graph, this design aids in exploring meaningful patterns within irregularly measured BP data. Through case studies, PhenoFlow has demonstrated its capability to support iterative analysis of extensive clinical datasets, reducing cognitive load and enabling neurologists to make well-informed decisions. Grounded in long-term collaboration with domain experts, our research demonstrates the potential of utilizing LLMs to tackle current challenges in data-driven clinical decision-making for acute ischemic stroke patients.
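
The metadata-only idea can be pictured as building the LLM prompt from a table's schema rather than its rows. The sketch below is our reading of that design, not PhenoFlow's code; in particular, what counts as "metadata" here (column names, dtypes, value ranges) and the prompt wording are assumptions.

```python
import pandas as pd

def metadata_prompt(df: pd.DataFrame, question: str) -> str:
    """Describe the table to an LLM using schema only; no raw patient rows."""
    lines = []
    for col in df.columns:
        s = df[col]
        if pd.api.types.is_numeric_dtype(s):
            lines.append(f"- {col}: numeric, range [{s.min()}, {s.max()}]")
        else:
            lines.append(f"- {col}: categorical, {s.nunique()} distinct values")
    return ("You are given a pandas DataFrame named `df` with this schema:\n"
            + "\n".join(lines)
            + f"\nWrite Python code (code only) that answers: {question}")

# Toy stand-in for a stroke registry table; individual values never enter
# the prompt, only aggregate schema information.
df = pd.DataFrame({"sbp": [142, 160, 131], "arrival": ["ER", "transfer", "ER"]})
print(metadata_prompt(df, "Plot the distribution of systolic blood pressure."))
```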

Citations: 0
SLInterpreter: An Exploratory and Iterative Human-AI Collaborative System for GNN-based Synthetic Lethal Prediction.
Pub Date: 2024-09-24 DOI: 10.1109/TVCG.2024.3456325
Haoran Jiang, Shaohan Shi, Shuhao Zhang, Jie Zheng, Quan Li

Synthetic Lethal (SL) relationships, though rare among the vast array of gene combinations, hold substantial promise for targeted cancer therapy. Despite advancements in AI model accuracy, there is still a significant need among domain experts for interpretive paths and mechanism explorations that align better with domain-specific knowledge, particularly due to the high costs of experimentation. To address this gap, we propose an iterative Human-AI collaborative framework with two key components: 1) Human-Engaged Knowledge Graph Refinement based on Metapath Strategies, which leverages insights from interpretive paths and domain expertise to refine the knowledge graph through metapath strategies with appropriate granularity. 2) Cross-Granularity SL Interpretation Enhancement and Mechanism Analysis, which aids experts in organizing and comparing predictions and interpretive paths across different granularities, uncovering new SL relationships, enhancing result interpretation, and elucidating potential mechanisms inferred by Graph Neural Network (GNN) models. These components cyclically optimize model predictions and mechanism explorations, enhancing expert involvement and intervention to build trust. Facilitated by SLInterpreter, this framework ensures that newly generated interpretive paths increasingly align with domain knowledge and adhere more closely to real-world biological principles through iterative Human-AI collaboration. We evaluate the framework's efficacy through a case study and expert interviews.
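
Metapath-guided exploration of a knowledge graph can be sketched on a toy heterogeneous graph: enumerate paths whose node types match a template such as gene -> pathway -> gene, one coarse form of interpretive path. The graph, node kinds, and metapath below are illustrative only; the paper's knowledge graph and refinement strategies are far more involved.

```python
import networkx as nx

# Toy heterogeneous graph; "kind" attributes stand in for entity types in
# a biomedical knowledge graph. Entities are illustrative.
G = nx.Graph()
G.add_node("BRCA1", kind="gene")
G.add_node("PARP1", kind="gene")
G.add_node("HR_repair", kind="pathway")
G.add_edges_from([("BRCA1", "HR_repair"), ("PARP1", "HR_repair")])

def metapath_paths(graph, source, metapath):
    """Yield simple paths from source whose node kinds match the metapath."""
    assert graph.nodes[source]["kind"] == metapath[0]
    def extend(path):
        if len(path) == len(metapath):
            yield list(path)
            return
        for nbr in graph.neighbors(path[-1]):
            if graph.nodes[nbr]["kind"] == metapath[len(path)] and nbr not in path:
                yield from extend(path + [nbr])
    yield from extend([source])

for p in metapath_paths(G, "BRCA1", ("gene", "pathway", "gene")):
    print(" -> ".join(p))   # BRCA1 -> HR_repair -> PARP1
```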

Citations: 0
Loops: Leveraging Provenance and Visualization to Support Exploratory Data Analysis in Notebooks.
Pub Date: 2024-09-23 DOI: 10.1109/TVCG.2024.3456186
Klaus Eckelt, Kiran Gadhave, Alexander Lex, Marc Streit

Exploratory data science is an iterative process of obtaining, cleaning, profiling, analyzing, and interpreting data. This cyclical way of working creates challenges within the linear structure of computational notebooks, leading to issues with code quality, recall, and reproducibility. To remedy this, we present Loops, a set of visual support techniques for iterative and exploratory data analysis in computational notebooks. Loops leverages provenance information to visualize the impact of changes made within a notebook. In visualizations of the notebook provenance, we trace the evolution of the notebook over time and highlight differences between versions. Loops visualizes the provenance of code, markdown, tables, visualizations, and images and their respective differences. Analysts can explore these differences in detail in a separate view. Loops not only makes the analysis process transparent but also supports analysts in their data science work by showing the effects of changes and facilitating comparison of multiple versions. We demonstrate our approach's utility and potential impact in two use cases and feedback from notebook users from various backgrounds. This paper and all supplemental materials are available at https://osf.io/79eyn.
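
The version-to-version differences that Loops encodes can be approximated with a sequence diff over two snapshots of a notebook's cells; the sketch below uses the standard library's difflib on simplified string cells and is not the system's actual provenance mechanism.

```python
import difflib

# Two snapshots of the same (heavily simplified) notebook: lists of cells.
v1 = ["import pandas as pd", "df = pd.read_csv('data.csv')", "df.describe()"]
v2 = ["import pandas as pd", "df = pd.read_csv('data.csv', sep=';')",
      "df.head()", "df.describe()"]

# Classify each cell as kept, removed, or added between the two versions.
sm = difflib.SequenceMatcher(a=v1, b=v2)
for tag, i1, i2, j1, j2 in sm.get_opcodes():
    if tag == "equal":
        for c in v1[i1:i2]: print(f"kept    | {c}")
    if tag in ("replace", "delete"):
        for c in v1[i1:i2]: print(f"removed | {c}")
    if tag in ("replace", "insert"):
        for c in v2[j1:j2]: print(f"added   | {c}")
```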

Citations: 0
StuGPTViz: A Visual Analytics Approach to Understand Student-ChatGPT Interactions.
Pub Date: 2024-09-23 DOI: 10.1109/TVCG.2024.3456363
Zixin Chen, Jiachen Wang, Meng Xia, Kento Shigyo, Dingdong Liu, Rong Zhang, Huamin Qu

The integration of Large Language Models (LLMs), especially ChatGPT, into education is poised to revolutionize students' learning experiences by introducing innovative conversational learning methodologies. To empower students to fully leverage the capabilities of ChatGPT in educational scenarios, understanding students' interaction patterns with ChatGPT is crucial for instructors. However, this endeavor is challenging due to the absence of datasets focused on student-ChatGPT conversations and the complexities in identifying and analyzing the evolving interaction patterns within conversations. To address these challenges, we collected conversational data from 48 students interacting with ChatGPT in a master's level data visualization course over one semester. We then developed a coding scheme, grounded in the literature on cognitive levels and thematic analysis, to categorize students' interaction patterns with ChatGPT. Furthermore, we present a visual analytics system, StuGPTViz, that tracks and compares temporal patterns in student prompts and the quality of ChatGPT's responses at multiple scales, revealing significant pedagogical insights for instructors. We validated the system's effectiveness through expert interviews with six data visualization instructors and three case studies. The results confirmed StuGPTViz's capacity to enhance educators' insights into the pedagogical value of ChatGPT. We also discussed the potential research opportunities of applying visual analytics in education and developing AI-driven personalized learning solutions.
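
Once prompts have been coded, the temporal patterns the system surfaces reduce, at their simplest, to grouped counts over time and per student. The sketch below uses an invented log and invented category labels; the paper's coding scheme and visual encodings are considerably richer.

```python
import pandas as pd

# Hypothetical, already-coded prompt log; the "code" labels are invented
# stand-ins for whatever categories the paper's coding scheme defines.
log = pd.DataFrame({
    "student": ["s1", "s1", "s2", "s2", "s2"],
    "week":    [1, 1, 1, 2, 2],
    "code":    ["recall", "apply", "recall", "analyze", "apply"],
})

# Class-level scale: category counts per week.
print(log.groupby(["week", "code"]).size().unstack(fill_value=0))

# Student-level scale: one student's category profile.
print(log[log.student == "s2"].groupby("code").size())
```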

Citations: 0
DimBridge: Interactive Explanation of Visual Patterns in Dimensionality Reductions with Predicate Logic.
Pub Date: 2024-09-23 DOI: 10.1109/TVCG.2024.3456391
Brian Montambault, Gabriel Appleby, Jen Rogers, Camelia D Brumar, Mingwei Li, Remco Chang

Dimensionality reduction techniques are widely used for visualizing high-dimensional data. However, support for interpreting patterns of dimension reduction results in the context of the original data space is often insufficient. Consequently, users may struggle to extract insights from the projections. In this paper, we introduce DimBridge, a visual analytics tool that allows users to interact with visual patterns in a projection and retrieve corresponding data patterns. DimBridge supports several interactions, allowing users to perform various analyses, from contrasting multiple clusters to explaining complex latent structures. Leveraging first-order predicate logic, DimBridge identifies subspaces in the original dimensions relevant to a queried pattern and provides an interface for users to visualize and interact with them. We demonstrate how DimBridge can help users overcome the challenges associated with interpreting visual patterns in projections.
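
One simple way to turn a brushed region of a projection into first-order predicates is to fit a shallow decision tree on the original features and read its splits as feature-interval predicates. The sketch below is a stand-in illustrating that idea; it is not DimBridge's actual induction algorithm.

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
proj = PCA(n_components=2).fit_transform(iris.data)

# Pretend the user brushed the right-hand side of the 2D projection.
brushed = proj[:, 0] > 1.5

# A depth-2 tree separating brushed from unbrushed points; each split is a
# predicate over an *original* dimension, not over projection coordinates.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(iris.data, brushed)
print(export_text(tree, feature_names=list(iris.feature_names)))
```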

Citations: 0
BEMTrace: Visualization-driven approach for deriving Building Energy Models from BIM.
Pub Date: 2024-09-23 DOI: 10.1109/TVCG.2024.3456315
Andreas Walch, Attila Szabo, Harald Steinlechner, Thomas Ortner, Eduard Groller, Johanna Schmidt

Building Information Modeling (BIM) describes a central data pool covering the entire life cycle of a construction project. Similarly, Building Energy Modeling (BEM) describes the process of using a 3D representation of a building as a basis for thermal simulations to assess the building's energy performance. This paper explores the intersection of BIM and BEM, focusing on the challenges and methodologies in converting BIM data into BEM representations for energy performance analysis. BEMTrace integrates 3D data wrangling techniques with visualization methodologies to enhance the accuracy and traceability of the BIM-to-BEM conversion process. Through parsing, error detection, and algorithmic correction of BIM data, our methods generate valid BEM models suitable for energy simulation. Visualization techniques provide transparent insights into the conversion process, aiding error identification, validation, and user comprehension. We introduce context-adaptive selections to facilitate user interaction and to show that the BEMTrace workflow helps users understand complex 3D data wrangling processes.
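
A single validation step in such a BIM-to-BEM pipeline might look like the sketch below: flag degenerate boundary polygons before they reach thermal simulation. The geometry representation, tolerance, and function names are illustrative assumptions, not BEMTrace code.

```python
def shoelace_area(pts):
    """Signed area of a 2D polygon given as [(x, y), ...]."""
    n = len(pts)
    return 0.5 * sum(pts[i][0] * pts[(i + 1) % n][1]
                     - pts[(i + 1) % n][0] * pts[i][1] for i in range(n))

def validate_surface(pts, min_area=1e-6):
    """Return problems that would invalidate an energy-model surface."""
    errors = []
    if len(pts) < 3:
        errors.append("fewer than 3 vertices")
    elif abs(shoelace_area(pts)) < min_area:
        errors.append("degenerate polygon (near-zero area)")
    return errors

print(validate_surface([(0, 0), (4, 0), (4, 3), (0, 3)]))  # [] -> passes
print(validate_surface([(0, 0), (4, 0), (8, 0)]))          # collinear -> flagged
```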

Citations: 0