
IEEE Transactions on Visualization and Computer Graphics: Latest Publications

DaedalusData: Exploration, Knowledge Externalization and Labeling of Particles in Medical Manufacturing - A Design Study.
Pub Date: 2024-09-23 DOI: 10.1109/TVCG.2024.3456329
Alexander Wyss, Gabriela Morgenshtern, Amanda Hirsch-Husler, Jurgen Bernard

In medical diagnostics of both early disease detection and routine patient care, particle-based contamination of in-vitro diagnostics consumables poses a significant threat to patients. Objective, data-driven decision-making on the severity of contamination is key to reducing patient risk while saving time and cost in quality assessment. Our collaborators introduced us to their quality control process, including particle data acquisition through image recognition, feature extraction, and attributes reflecting the production context of particles. The current process is limited in its support for exploring thousands of images and for data-driven decision-making, and its knowledge externalization is ineffective. Following the design study methodology, our contributions are a characterization of the problem space and requirements, the development and validation of DaedalusData, a comprehensive discussion of our study's learnings, and a generalizable framework for knowledge externalization. DaedalusData is a visual analytics system that enables domain experts to explore particle contamination patterns, label particles in label alphabets, and externalize knowledge through semi-supervised, label-informed data projections. The results of our case study and user study show the high usability of DaedalusData and its efficient support of experts in generating comprehensive overviews of thousands of particles, labeling large quantities of particles, and externalizing knowledge to further augment the dataset. Reflecting on our approach, we discuss insights on dataset augmentation via human knowledge externalization, and on the scalability and trade-offs that come with adopting this approach in practice.
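The abstract's central mechanism, a semi-supervised, label-informed projection, can be approximated with off-the-shelf tooling. The sketch below is not the authors' implementation; it assumes the umap-learn package, a placeholder feature matrix, and the common convention of marking unlabeled points with -1 so that a handful of expert labels steer the layout.

```python
# Sketch of a semi-supervised, label-informed projection in the spirit of
# DaedalusData, using umap-learn (not the authors' code). Unlabeled points
# are marked with -1 so a few expert labels partially steer the layout.
import numpy as np
import umap

rng = np.random.default_rng(0)
features = rng.normal(size=(5000, 64))       # placeholder: particle image features
labels = np.full(5000, -1)                   # -1 = not yet labeled by an expert
labels[:200] = rng.integers(0, 4, size=200)  # a few expert labels from one label alphabet

# target_weight balances data structure (near 0.0) against label guidance (near 1.0)
reducer = umap.UMAP(n_neighbors=15, target_weight=0.4, random_state=0)
embedding = reducer.fit_transform(features, y=labels)  # (5000, 2) layout for plotting
```

Re-running the projection as more labels arrive is what lets externalized knowledge reshape the overview over time.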

Citations: 0
Towards Enhancing Low Vision Usability of Data Charts on Smartphones.
Pub Date: 2024-09-20 DOI: 10.1109/TVCG.2024.3456348
Yash Prakash, Pathan Aseef Khan, Akshay Kolgar Nayak, Sampath Jayarathna, Hae-Na Lee, Vikas Ashok

The importance of data charts is self-evident, given their ability to express complex data in a simple format that facilitates quick and easy comparisons, analysis, and consumption. However, the inherent visual nature of the charts creates barriers for people with visual impairments to reap the associated benefits to the same extent as their sighted peers. While extant research has predominantly focused on understanding and addressing these barriers for blind screen reader users, the needs of low-vision screen magnifier users have been largely overlooked. In an interview study, almost all low-vision participants stated that it was challenging to interact with data charts on small screen devices such as smartphones and tablets, even though they could technically "see" the chart content. They ascribed these challenges mainly to the magnification-induced loss of visual context that connected data points with each other and also with chart annotations, e.g., axis values. In this paper, we present a method that addresses this problem by automatically transforming charts that are typically non-interactive images into personalizable interactive charts which allow selective viewing of desired data points and preserve visual context as much as possible under screen enlargement. We evaluated our method in a usability study with 26 low-vision participants, who all performed a set of representative chart-related tasks under different study conditions. In the study, we observed that our method significantly improved the usability of charts over both the status quo screen magnifier and a state-of-the-art space compaction-based solution.
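As one concrete reading of this transformation, the hedged sketch below uses matplotlib's pick events (not the authors' system) to make a static bar chart selectable, pinning a selected bar's value next to it so that value context survives panning and zooming under magnification.

```python
# Minimal sketch of "selective viewing with preserved context": clicking a bar
# annotates its value at the bar itself, so the value stays visible even when
# the axis is outside the magnified viewport. Requires an interactive backend.
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
bars = ax.bar(["Q1", "Q2", "Q3", "Q4"], [12, 30, 22, 17], picker=True)

def on_pick(event):
    bar = event.artist
    h = bar.get_height()
    # pin the axis value to the data point so it travels with it under zoom
    ax.annotate(f"{h}", (bar.get_x() + bar.get_width() / 2, h),
                ha="center", va="bottom", fontsize=14, fontweight="bold")
    fig.canvas.draw_idle()

fig.canvas.mpl_connect("pick_event", on_pick)
plt.show()
```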

Citations: 0
Visualization Atlases: Explaining and Exploring Complex Topics through Data, Visualization, and Narration.
Pub Date: 2024-09-20 DOI: 10.1109/TVCG.2024.3456311
Jinrui Wang, Xinhuan Shu, Benjamin Bach, Uta Hinrichs

This paper defines, analyzes, and discusses the emerging genre of visualization atlases. We currently witness an increase in web-based, data-driven initiatives that call themselves "atlases" while explaining complex, contemporary issues through data and visualizations: climate change, sustainability, AI, or cultural discoveries. To understand this emerging genre and inform their design, study, and authoring support, we conducted a systematic analysis of 33 visualization atlases and semi-structured interviews with eight visualization atlas creators. Based on our results, we contribute (1) a definition of a visualization atlas as a compendium of (web) pages aimed at explaining and supporting exploration of data about a dedicated topic through data, visualizations, and narration; (2) a set of design patterns spanning 8 design dimensions; (3) insights into atlas creation from the interviews; and (4) a definition of 5 visualization atlas genres. We found that visualization atlases are unique in the way they combine (i) exploratory visualization, (ii) narrative elements from data-driven storytelling, and (iii) structured navigation mechanisms. They target a wide range of audiences with different levels of domain knowledge, acting as tools for study, communication, and discovery. We conclude with a discussion of current design practices and emerging questions around the ethics and potential real-world impact of visualization atlases, aimed at informing the design and study of visualization atlases.

Citations: 0
Generalization of CNNs on Relational Reasoning With Bar Charts.
Pub Date: 2024-09-19 DOI: 10.1109/TVCG.2024.3463800
Zhenxing Cui, Lu Chen, Yunhai Wang, Daniel Haehn, Yong Wang, Hanspeter Pfister

This paper presents a systematic study of the generalization of convolutional neural networks (CNNs) and humans on relational reasoning tasks with bar charts. We first revisit previous experiments on graphical perception and update the benchmark performance of CNNs. We then test the generalization performance of CNNs on a classic relational reasoning task, estimating bar length ratios in a bar chart, by progressively perturbing the standard visualizations. We further conduct a user study to compare the performance of CNNs and humans. Our results show that CNNs outperform humans only when the training and test data have the same visual encodings. Otherwise, they may perform worse. We also find that CNNs are sensitive to perturbations in various visual encodings, regardless of their relevance to the target bars. Yet, humans are mainly influenced by bar lengths. Our study suggests that robust relational reasoning with visualizations is challenging for CNNs. Improving CNNs' generalization performance may require training them to better recognize task-related visual properties.
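To make the task concrete, the sketch below generates one bar-length-ratio stimulus of the kind used in classic graphical-perception benchmarks; the image size, bar count, and dot markers are assumptions, not the paper's exact stimulus pipeline.

```python
# Sketch: generate one bar-length-ratio stimulus in the style of classic
# graphical-perception experiments (placeholder sizes; not the paper's pipeline).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(7)
heights = rng.uniform(10, 90, size=10)        # ten bars
i, j = rng.choice(10, size=2, replace=False)  # two marked target bars
label = min(heights[i], heights[j]) / max(heights[i], heights[j])  # ratio in (0, 1]

fig, ax = plt.subplots(figsize=(2, 2), dpi=50)  # small raster, CNN-friendly
ax.bar(range(10), heights, color="black")
ax.plot([i, j], [heights[i] + 3, heights[j] + 3], "k.", ms=2)  # dots mark the targets
ax.axis("off")
fig.savefig("stimulus.png")  # one (image, label) training/evaluation pair
```

Perturbing this generator (bar widths, gaps, strokes, background) while keeping the label fixed is the kind of manipulation the generalization tests rely on.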

Citations: 0
Adaptive Complementary Filter for Hybrid Inside-Out Outside-In HMD Tracking With Smooth Transitions.
Pub Date: 2024-09-19 DOI: 10.1109/TVCG.2024.3464738
Riccardo Monica, Dario Lodi Rizzini, Jacopo Aleotti

Head-mounted displays (HMDs) in room-scale virtual reality are usually tracked using inside-out visual SLAM algorithms. Alternatively, to track the motion of the HMD with respect to a fixed real-world reference frame, outside-in instrumentation such as a motion capture system can be adopted. However, outside-in tracking systems may temporarily lose tracking, as they suffer from occlusion and blind spots. A possible solution is to adopt a hybrid approach in which the inside-out tracker of the HMD is augmented with an outside-in sensing system. On the other hand, when the tracking signal of the outside-in system is recovered after a loss of tracking, the transition from inside-out tracking to hybrid tracking may generate a discontinuity, i.e., a sudden change of the virtual viewpoint, which can be uncomfortable for the user. Therefore, hybrid tracking solutions for HMDs require advanced sensor fusion algorithms to obtain a smooth transition. This work proposes a method for hybrid tracking of an HMD with smooth transitions based on an adaptive complementary filter. The proposed approach can be configured with several parameters that determine a trade-off between user experience and tracking error. A user study was carried out in a room-scale virtual reality environment, where users carried out two different tasks while multiple tracking-signal losses of the outside-in sensor system occurred. The results show that the proposed approach improves user experience compared to a standard Extended Kalman Filter, and that tracking error is lower compared to a state-of-the-art complementary filter when configured for the same quality of user experience.
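The paper's exact filter is not reproduced here, but the general shape of an adaptive complementary filter for this hybrid setting can be sketched as follows: dead-reckon on inside-out pose deltas, and once the outside-in signal returns, ramp a correction gain up from zero so the viewpoint converges smoothly instead of jumping. All gains below are illustrative assumptions.

```python
# Minimal sketch of an adaptive complementary filter for hybrid HMD tracking
# (illustrative, not the paper's formulation). The blend gain alpha is reset
# on signal loss and ramped back up after reacquisition, so the outside-in
# correction is applied gradually rather than as a sudden viewpoint change.
import numpy as np

class AdaptiveComplementaryFilter:
    def __init__(self, alpha_max=0.05, ramp=0.001):
        self.pos = np.zeros(3)        # fused position estimate
        self.alpha = 0.0              # current correction gain
        self.alpha_max = alpha_max    # steady-state gain once fully blended
        self.ramp = ramp              # gain recovery speed after signal loss

    def update(self, inside_out_delta, outside_in_pos=None):
        self.pos += inside_out_delta            # dead-reckon with inside-out SLAM
        if outside_in_pos is None:              # outside-in tracking lost
            self.alpha = 0.0                    # trust inside-out alone
        else:
            # ramp the gain up gradually: smooth, not instantaneous, correction
            self.alpha = min(self.alpha + self.ramp, self.alpha_max)
            self.pos += self.alpha * (outside_in_pos - self.pos)
        return self.pos
```

The trade-off the abstract mentions shows up directly in alpha_max and ramp: larger values pull tracking error down faster but make the correction more perceptible.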

Citations: 0
Rapid and Precise Topological Comparison with Merge Tree Neural Networks.
Pub Date: 2024-09-19 DOI: 10.1109/TVCG.2024.3456395
Yu Qin, Brittany Terese Fasy, Carola Wenk, Brian Summa

Merge trees are a valuable tool in the scientific visualization of scalar fields; however, current methods for merge tree comparisons are computationally expensive, primarily due to the exhaustive matching between tree nodes. To address this challenge, we introduce the Merge Tree Neural Network (MTNN), a learned neural network model designed for merge tree comparison. The MTNN enables rapid and high-quality similarity computation. We first demonstrate how to train graph neural networks, which have emerged as effective encoders for graphs, in order to produce embeddings of merge trees in vector spaces for efficient similarity comparison. Next, we formulate the novel MTNN model that further improves the similarity comparisons by integrating the tree and node embeddings with a new topological attention mechanism. We demonstrate the effectiveness of our model on real-world data in different domains and examine our model's generalizability across various datasets. Our experimental analysis demonstrates our approach's superiority in accuracy and efficiency. In particular, we speed up the prior state-of-the-art by more than 100× on the benchmark datasets while maintaining an error rate below 0.1%.
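The speedup rests on replacing per-node matching with a one-time embedding per tree followed by a cheap vector comparison. The sketch below illustrates only that structure; the two-round neighborhood averaging is a stand-in for the paper's learned GNN encoder and topological attention, not a reproduction of them.

```python
# Sketch of the fast-comparison idea behind MTNN (illustrative only): embed
# each merge tree into one vector, then compare trees by vector similarity
# instead of exhaustively matching nodes.
import numpy as np

def embed_tree(node_feats, edges, rounds=2):
    """node_feats: (n, d) array; edges: list of (parent, child) index pairs."""
    h = node_feats.copy()
    for _ in range(rounds):                # toy message passing over tree edges
        agg = h.copy()
        cnt = np.ones(len(h))
        for u, v in edges:
            agg[u] += h[v]; agg[v] += h[u]
            cnt[u] += 1; cnt[v] += 1
        h = agg / cnt[:, None]             # average self + neighbor features
    return h.mean(axis=0)                  # pool node embeddings into a tree vector

def similarity(emb_a, emb_b):
    # cosine similarity: O(d) per pair, versus exhaustive node matching
    return float(emb_a @ emb_b / (np.linalg.norm(emb_a) * np.linalg.norm(emb_b)))
```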

Citations: 0
Gesture2Text: A Generalizable Decoder for Word-Gesture Keyboards in XR Through Trajectory Coarse Discretization and Pre-Training.
Pub Date: 2024-09-16 DOI: 10.1109/TVCG.2024.3456198
Junxiao Shen, Khadija Khaldi, Enmin Zhou, Hemant Bhaskar Surale, Amy Karlson
Text entry with word-gesture keyboards (WGK) is emerging as a popular method and becoming a key interaction for Extended Reality (XR). However, the diversity of interaction modes, keyboard sizes, and visual feedback in these environments introduces divergent word-gesture trajectory data patterns, thus leading to complexity in decoding trajectories into text. Template-matching decoding methods, such as SHARK2 [32], are commonly used for these WGK systems because they are easy to implement and configure. However, these methods are susceptible to decoding inaccuracies for noisy trajectories. While conventional neural-network-based decoders (neural decoders) trained on word-gesture trajectory data have been proposed to improve accuracy, they have their own limitations: they require extensive data for training and deep-learning expertise for implementation. To address these challenges, we propose a novel solution that combines ease of implementation with high decoding accuracy: a generalizable neural decoder enabled by pre-training on large-scale coarsely discretized word-gesture trajectories. This approach produces a ready-to-use WGK decoder that is generalizable across mid-air and on-surface WGK systems in augmented reality (AR) and virtual reality (VR), as evidenced by a robust average Top-4 accuracy of 90.4% on four diverse datasets. It significantly outperforms SHARK2 with a 37.2% enhancement and surpasses the conventional neural decoder by 7.4%. Moreover, the Pre-trained Neural Decoder's size is only 4 MB after quantization, without sacrificing accuracy, and it can operate in real-time, executing in just 97 milliseconds on Quest 3.
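The coarse discretization step can be pictured as snapping a noisy 2D gesture path to a low-resolution grid over the keyboard and collapsing repeated cells into a compact token sequence; the grid size below is an assumption for illustration, not the paper's setting.

```python
# Sketch of coarse trajectory discretization in the spirit of Gesture2Text
# (grid resolution is an assumption): a noisy word-gesture path is snapped to
# a coarse grid over the keyboard, and consecutive duplicate cells are merged
# into a short, noise-tolerant token sequence for a decoder.
import numpy as np

def discretize(path, grid=(8, 3)):
    """path: (n, 2) points normalized to [0, 1]^2 over the keyboard area."""
    cols = np.minimum((path[:, 0] * grid[0]).astype(int), grid[0] - 1)
    rows = np.minimum((path[:, 1] * grid[1]).astype(int), grid[1] - 1)
    tokens = rows * grid[0] + cols          # one token id per grid cell
    keep = np.ones(len(tokens), dtype=bool)
    keep[1:] = tokens[1:] != tokens[:-1]    # collapse consecutive duplicates
    return tokens[keep]
```

Because small tracking jitter usually stays within one coarse cell, mid-air and on-surface trajectories map to similar token sequences, which is what makes pre-training on such sequences transferable.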
Citations: 0
Shape It Up: An Empirically Grounded Approach for Designing Shape Palettes.
Pub Date: 2024-09-16 DOI: 10.1109/TVCG.2024.3456385
Chin Tseng, Arran Zeyu Wang, Ghulam Jilani Quadri, Danielle Albers Szafir

Shape is commonly used to distinguish between categories in multi-class scatterplots. However, existing guidelines for choosing effective shape palettes rely largely on intuition and do not consider how these needs may change as the number of categories increases. Unlike color, shapes cannot be represented by a numerical space, making it difficult to propose general guidelines or design heuristics for using shape effectively. This paper presents a series of four experiments evaluating the efficiency of 39 shapes across three tasks: relative mean judgment tasks, expert preference, and correlation estimation. Our results show that conventional means for reasoning about shapes, such as filled versus unfilled, are insufficient to inform effective palette design. Further, even expert palettes vary significantly in their use of shape and corresponding effectiveness. To support effective shape palette design, we developed a model based on pairwise relations between shapes in our experiments and the number of shapes required for a given design. We embed this model in a palette design tool to give designers agency over shape selection while incorporating empirical elements of perceptual performance captured in our study. Our model advances understanding of shape perception in visualization contexts and provides practical design guidelines that can help improve categorical data encodings.
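One way to operationalize a model built on pairwise shape relations is greedy max-min selection: repeatedly add the shape whose worst-case discriminability against the current palette is highest. The sketch below assumes a hypothetical pairwise score matrix; the paper's model derives its real scores from the experiments.

```python
# Sketch of palette selection from pairwise shape relations (hypothetical
# scores; not the paper's tool). Greedy max-min: each step adds the shape
# with the best worst-case discriminability against the palette so far.
import numpy as np

def build_palette(pairwise, k):
    """pairwise: (n, n) symmetric discriminability scores, zero diagonal.
    Returns k shape indices."""
    n = len(pairwise)
    # seed with one member of the single most discriminable pair
    palette = [int(np.unravel_index(np.argmax(pairwise), (n, n))[0])]
    while len(palette) < k:
        rest = [i for i in range(n) if i not in palette]
        nxt = max(rest, key=lambda i: min(pairwise[i][j] for j in palette))
        palette.append(nxt)
    return palette
```

This mirrors why the number of categories matters: as k grows, the attainable worst-case separation shrinks, so a palette that works for 4 classes can fail for 8.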

Citations: 0
HaptoFloater: Visuo-Haptic Augmented Reality by Embedding Imperceptible Color Vibration Signals for Tactile Display Control in a Mid-Air Image.
Pub Date: 2024-09-16 DOI: 10.1109/TVCG.2024.3456175
Rina Nagano, Takahiro Kinoshita, Shingo Hattori, Yuichi Hiroi, Yuta Itoh, Takefumi Hiraki
We propose HaptoFloater, a low-latency mid-air visuo-haptic augmented reality (VHAR) system that utilizes imperceptible color vibrations. When adding tactile stimuli to the visual information of a mid-air image, the user should not perceive the latency between the tactile and visual information. However, conventional tactile presentation methods for mid-air images, based on camera-detected fingertip positioning, introduce latency due to image processing and communication. To mitigate this latency, we use a color vibration technique; humans cannot perceive the vibration when the display alternates between two different color stimuli at a frequency of 25 Hz or higher. In our system, we embed this imperceptible color vibration into the mid-air image formed by a micromirror array plate, and a photodiode on the fingertip device directly detects this color vibration to provide tactile stimulation. Thus, our system allows for the tactile perception of multiple patterns on a mid-air image in 59.5 ms. In addition, we evaluate the visual-haptic delay tolerance on a mid-air display using our VHAR system and a tactile actuator with a single pattern and faster response time. The results of our user study indicate a visual-haptic delay tolerance of 110.6 ms, which is considerably larger than the latency associated with systems using multiple tactile patterns.
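The color-vibration trick can be sketched as follows: render two frames that average back to the target color and alternate them at or above the fusion frequency, so viewers perceive only the target color while a photodiode can still read the alternation as a signal. The channel choice, amplitude, and rates below are assumptions, not the paper's parameters.

```python
# Sketch of imperceptible color vibration (values are assumptions): two
# frames that average to the target color are alternated at >= 25 Hz, above
# the human fusion threshold, yet readable by a photodiode as a binary signal.
import numpy as np

def vibration_pair(rgb, amplitude=0.04):
    """rgb in [0,1]^3 -> (hi, lo) frame colors that average back to rgb."""
    delta = np.array([0.0, 0.0, amplitude])    # modulate one channel only
    hi = np.clip(np.asarray(rgb) + delta, 0, 1)
    lo = np.clip(np.asarray(rgb) - delta, 0, 1)
    return hi, lo

def frame_color(rgb, frame_idx, hz=30, fps=60):
    """Color to render on a given frame; with fps=60, hz=30 flips every frame."""
    hi, lo = vibration_pair(rgb)
    return hi if (frame_idx * hz * 2 // fps) % 2 == 0 else lo
```

Encoding different tactile patterns then amounts to varying the modulation (e.g., its amplitude or channel) per screen region, which the fingertip photodiode distinguishes without any camera-based finger tracking in the loop.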
Citations: 0
MobiTangibles: Enabling Physical Manipulation Experiences of Virtual Precision Hand-Held Tools' Miniature Control in VR.
Pub Date: 2024-09-13 DOI: 10.1109/TVCG.2024.3456191
Abhijeet Mishra, Harshvardhan Singh, Aman Parnami, Jainendra Shukla
Realistic simulation of miniature control interactions, which are typically characterized by precise and confined motions and commonly found in precision hand-held tools such as calipers, powered engravers, and retractable knives, is beneficial for skill training with these kinds of tools in virtual reality (VR) environments. However, existing approaches aiming to simulate hand-held tools' miniature control manipulation experiences in VR entail prototyping complexity and require expertise, posing challenges for novice users and individuals with limited resources. Addressing this challenge, we introduce MobiTangibles, proxies for precision hand-held tools' miniature control interactions utilizing smartphone-based magnetic field sensing. MobiTangibles passively replicate fundamental miniature control experiences associated with hand-held tools, such as single-axis translation and rotation, enabling quick and easy use for diverse VR scenarios without requiring extensive technical knowledge. We conducted a comprehensive technical evaluation to validate the functionality of MobiTangibles across diverse settings, including evaluations of electromagnetic interference within indoor environments. In a user-centric evaluation involving 15 participants across bare-hand, VR-controller, and MobiTangibles conditions, we further assessed the quality of miniaturized manipulation experiences in VR. Our findings indicate that MobiTangibles outperformed conventional methods in realism and fatigue, receiving positive feedback.
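A plausible reading of the single-axis sensing pipeline, with made-up constants, is sketched below: a magnet on the tool's moving part modulates the field magnitude at the phone's magnetometer, and that reading is mapped to a normalized slider position. Nothing here is the authors' calibration.

```python
# Sketch of magnetometer-based single-axis translation sensing in the spirit
# of MobiTangibles (all constants are assumptions, not the paper's values).
import math

def field_magnitude(bx, by, bz):
    """Magnitude of the magnetometer reading, in microtesla."""
    return math.sqrt(bx * bx + by * by + bz * bz)

def slider_position(mag_ut, near_ut=900.0, far_ut=60.0):
    """Map field magnitude to a normalized 0..1 single-axis position."""
    mag = min(max(mag_ut, far_ut), near_ut)
    # dipole fields fall off steeply with distance, so interpolate in log space
    t = (math.log(mag) - math.log(far_ut)) / (math.log(near_ut) - math.log(far_ut))
    return t   # 0 = moving part fully extended (far), 1 = fully retracted (near)
```

Because the magnet is passive and the phone's own sensor does the work, the proxy needs no electronics of its own, which is what keeps prototyping cost low.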
Citations: 0