
Latest publications in IEEE Transactions on Visualization and Computer Graphics

RobustMap: Visual Exploration of DNN Adversarial Robustness in Generative Latent Space.
Pub Date: 2024-10-03 DOI: 10.1109/TVCG.2024.3471551
Jie Li, Jielong Kuang

The paper presents a novel approach to visualizing the adversarial robustness (hereafter, robustness) of deep neural networks (DNNs). Traditional tests return only a single value reflecting a DNN's overall robustness across a fixed number of test samples. In contrast, we use test samples to train a generative model (GM) and render a DNN's robustness distribution over the infinitely many samples that can be generated within the GM's latent space. The approach extends the test samples, enabling users to obtain new ones and continually improve feature coverage. Moreover, the distribution provides more information about a DNN's robustness, enabling users to understand it comprehensively. We propose three methods to resolve the challenges of realizing the approach. Specifically, we (1) map a GM's high-dimensional latent space onto a plane with little information loss for visualization, (2) design a network that predicts a DNN's robustness on massive numbers of samples to speed up distribution rendering, and (3) develop a system that supports exploring the distribution from multiple perspectives. Subjective and objective experimental results demonstrate the usability and effectiveness of the approach.
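The abstract includes no code; the Python sketch below is only a hedged illustration of the pipeline it describes. A tiny decoder and classifier stand in for a trained GM and DNN, the smallest prediction-flipping FGSM budget serves as an assumed robustness proxy, and PCA stands in for the paper's custom latent-space projection; none of these are the authors' actual components.

```python
# Hedged sketch: sample a GM's latent space, score each generated sample's
# robustness, and lay the latent points out in 2D for a robustness map.
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

torch.manual_seed(0)
latent_dim, data_dim, n_classes, n_samples = 8, 32, 3, 200

decoder = nn.Linear(latent_dim, data_dim)             # stand-in for the GM
classifier = nn.Sequential(nn.Linear(data_dim, 16), nn.ReLU(),
                           nn.Linear(16, n_classes))  # stand-in for the DNN

def fgsm_margin(x, eps_grid=torch.linspace(0.01, 0.5, 20)):
    """Smallest FGSM budget that flips the prediction (a robustness proxy)."""
    x = x.detach().requires_grad_(True)
    logits = classifier(x)
    label = logits.argmax(dim=-1)
    nn.functional.cross_entropy(logits, label).backward()
    direction = x.grad.sign()
    for eps in eps_grid:
        if classifier(x + eps * direction).argmax(dim=-1) != label:
            return eps.item()
    return eps_grid[-1].item()        # robust within the whole budget grid

z = torch.randn(n_samples, latent_dim)                # sample the latent space
robustness = [fgsm_margin(decoder(zi).unsqueeze(0)) for zi in z]
xy = PCA(n_components=2).fit_transform(z.numpy())     # 2D latent layout
# Rendering xy colored by robustness gives the kind of robustness map the
# paper interpolates over the whole latent plane.
```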

Citations: 0
Smart Pipette: Elevating Laboratory Performance with Tactile Authenticity and Real-Time Feedback.
Pub Date: 2024-10-02 DOI: 10.1109/TVCG.2024.3472837
Juan M Pieschacon, Maurizio Costabile, Andrew Cunningham, Joanne Zucco, Stewart Von Itzstein, Ross T Smith

Mastering the correct use of laboratory equipment is a fundamental skill for undergraduate science students involved in laboratory-based training. However, hands-on laboratory time is often limited, and remote students may struggle as their absence from the physical lab limits their skill development. An air-displacement micropipette was selected for our initial investigation, as accuracy and correct technique are essential for generating reliable assay data. Handling small liquid volumes demands hand dexterity and practice to achieve proficiency. This research assesses the importance of tactile authenticity during training by faithfully replicating the micropipette's key physical and operational characteristics. We developed a custom haptic training approach called 'Smart Pipette', which promotes accurate operation and enhances laboratory dexterity training. A comparative user study with 34 participants evaluated the effectiveness of the Smart Pipette custom haptic device against training with off-the-shelf hardware, specifically the Quest VR hand controller, chosen because it is held in mid-air much like a laboratory micropipette. Both training conditions are integrated with the same self-paced virtual simulation displayed on a computer screen, offering clear video instructions and real-time guidance. Results demonstrated that participants trained with the Smart Pipette custom haptic device exhibited increased accuracy and precision while making fewer errors than those trained with off-the-shelf hardware. The Smart Pipette and the Quest VR controller showed no significant differences in cognitive load or system usability scores. Tactile-authentic interaction devices address challenges faced by online learners, and their applicability extends to traditional classrooms, where real-time feedback significantly enhances overall training performance outcomes.

Citations: 0
Client-Designer Negotiation in Data Visualization Projects.
Pub Date: 2024-10-02 DOI: 10.1109/TVCG.2024.3467189
Elsie Lee-Robbins, Arran Ridley, Eytan Adar

Data visualization designers and clients need to communicate effectively with each other to achieve a successful project. Unlike in a personal or solo project, working with a client introduces a layer of complexity to the process. The client and designer might have different ideas about what constitutes an acceptable solution that satisfies the goals and constraints of the project. Thus, the client-designer relationship is an important part of the design process. To better understand this relationship, we conducted an interview study with 12 data visualization designers. We develop a model of a client-designer project space consisting of three aspects: surfacing project goals, agreeing on resource allocation, and creating a successful design. For each aspect, designer and client have their own mental model of how they envision the project. Disagreements between these models can be resolved by negotiation that brings them closer to alignment. We identified three main negotiation strategies for navigating the project space: 1) expanding the project space to consider more potential options, 2) constraining the project space to narrow in on the boundaries, and 3) shifting the project space to different options. We discuss client-designer collaboration as a negotiated relationship, with opportunities and challenges for each side. We suggest ways to mitigate challenges and keep friction from developing into conflict.

Citations: 0
VMC: A Grammar for Visualizing Statistical Model Checks.
Pub Date: 2024-09-30 DOI: 10.1109/TVCG.2024.3456402
Ziyang Guo, Alex Kale, Matthew Kay, Jessica Hullman

Visualizations play a critical role in validating and improving statistical models. However, the design space of model check visualizations is not well understood, making it difficult for authors to explore and specify effective graphical model checks. VMC defines a model check visualization using four components: (1) samples of distributions of checkable quantities generated from the model, including predictive distributions for new data and distributions of model parameters; (2) transformations on observed data to facilitate comparison; (3) visual representations of distributions; and (4) layouts to facilitate comparing model samples and observed data. We contribute an implementation of VMC as an R package. We validate VMC by reproducing a set of canonical model check examples, and show how using VMC to generate model checks reduces the edit distance between visualizations relative to existing visualization toolkits. The findings of an interview study with three expert modelers who used VMC highlight challenges and opportunities for encouraging exploration of correct, effective model check visualizations.
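VMC itself is an R package, and its grammar is not reproduced here. As a rough illustration of the four components in plain matplotlib, the sketch below draws predictive samples from a model fitted to observed data (component 1), applies no transformation beyond the identity (2), represents each distribution as a step histogram (3), and overlays model samples and observed data in one layout (4); the normal model and all constants are illustrative assumptions.

```python
# A canonical model check (posterior-predictive overlay) in plain matplotlib;
# this is NOT the VMC package's API, only the kind of chart it specifies.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
observed = rng.normal(1.0, 2.0, size=150)             # stand-in observed data

# (1) samples of a checkable quantity: replicated datasets drawn from a
#     normal model fitted to the observed data
mu, sigma = observed.mean(), observed.std()
predictive = rng.normal(mu, sigma, size=(50, observed.size))

fig, ax = plt.subplots()
for rep in predictive:                                # (3) one curve per draw
    ax.hist(rep, bins=30, histtype="step", alpha=0.15, color="steelblue")
ax.hist(observed, bins=30, histtype="step", color="black", linewidth=2)
ax.set_title("Replicated (blue) vs. observed (black) data")  # (4) overlay
plt.show()
```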

Citations: 0
Large Language Models for Transforming Categorical Data to Interpretable Feature Vectors.
Pub Date: 2024-09-30 DOI: 10.1109/TVCG.2024.3460652
Karim Huesmann, Lars Linsen

When analyzing heterogeneous data comprising numerical and categorical attributes, it is common to treat the different data types separately or to transform the categorical attributes into numerical ones. The transformation has the advantage of facilitating an integrated multi-variate analysis of all attributes. We propose a novel technique for transforming categorical data into interpretable numerical feature vectors using Large Language Models (LLMs). The LLMs are used to identify a categorical attribute's main characteristics and assign numerical values to these characteristics, thus generating a multi-dimensional feature vector. The transformation can be computed fully automatically, but because the characteristics are interpretable, an end user can also adjust it intuitively. We provide an accompanying interactive tool that aims to validate and, where needed, improve the AI-generated outputs. Having transformed a categorical attribute, we propose novel methods for ordering and color-coding the categories based on the similarities of the feature vectors.
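A hedged sketch of this pipeline follows. The categories, characteristics, and the `ask_llm` stub are all illustrative assumptions rather than the paper's API, and random scores stand in for real LLM calls so the sketch stays runnable; the ordering/color-coding step projects the feature vectors onto their first principal direction so similar categories land next to each other.

```python
# Hedged sketch: LLM-scored characteristics -> feature vectors -> ordering.
import numpy as np

categories = ["sedan", "pickup truck", "motorcycle", "bus"]
characteristics = ["passenger capacity", "cargo capacity", "typical speed"]

def ask_llm(category, characteristic):
    """Hypothetical stub: return a 0-1 score for one characteristic of a category."""
    raise NotImplementedError("plug in a real LLM client here")

# Feature vector per category: one LLM-assigned value per characteristic.
# Random scores stand in for ask_llm so the sketch is runnable as-is.
rng = np.random.default_rng(0)
features = rng.random((len(categories), len(characteristics)))

# Project onto the first principal direction; the resulting 1D position
# orders the categories by similarity and doubles as a 0-1 color position.
centered = features - features.mean(axis=0)
direction = np.linalg.svd(centered, full_matrices=False)[2][0]
position = centered @ direction
order = np.argsort(position)
print([categories[i] for i in order])                  # similarity-based order
color_pos = (position - position.min()) / np.ptp(position)  # 0-1 color values
```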

Citations: 0
Tooth Motion Monitoring in Orthodontic Treatment by Mobile Device-based Multi-view Stereo.
Pub Date: 2024-09-30 DOI: 10.1109/TVCG.2024.3470992
Jiaming Xie, Congyi Zhang, Guangshun Wei, Peng Wang, Guodong Wei, Wenxi Liu, Min Gu, Ping Luo, Wenping Wang

Orthodontics has become an important part of modern personal life, helping to improve mastication and raise self-esteem. However, the quality of orthodontic treatment still relies heavily on the empirical evaluation of experienced doctors, which lacks quantitative assessment and requires patients to visit clinics frequently for in-person examination. To resolve this problem, we propose a novel and practical mobile device-based framework for precisely measuring tooth movement during treatment, so as to simplify and strengthen the traditional tooth monitoring process. To this end, we formulate the tooth movement monitoring task as a multi-view, multi-object pose estimation problem over views that capture multiple texture-less and severely occluded objects (i.e., teeth). Specifically, we exploit a pre-scanned 3D tooth model and a sparse set of multi-view tooth images as inputs to our proposed tooth monitoring framework. After extracting tooth contours and localizing each view's initial camera pose from the initial configuration, we propose a joint pose estimation scheme to precisely estimate the 3D pose of each individual tooth, so as to infer the teeth's relative offsets during treatment. Furthermore, we introduce the Relative Pose Bias metric to evaluate individual tooth pose accuracy at a fine scale. We demonstrate that our approach achieves the high accuracy and efficiency that practical orthodontic treatment monitoring requires.
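The abstract does not define the Relative Pose Bias metric, so no attempt is made to reproduce it here. The sketch below only shows a standard way to quantify one tooth's relative offset between two visits, assuming each estimated pose is given as a rigid 4x4 matrix; the millimetre units and example poses are illustrative.

```python
# Relative rigid motion between two pose estimates of the same tooth.
import numpy as np

def pose_offset(T_before, T_after):
    """Translation magnitude and rotation angle taking one pose to the other."""
    delta = T_after @ np.linalg.inv(T_before)
    translation = np.linalg.norm(delta[:3, 3])               # e.g., millimetres
    cos_angle = (np.trace(delta[:3, :3]) - 1.0) / 2.0        # from the rotation
    angle_deg = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return translation, angle_deg

T0 = np.eye(4)                      # pose at the first visit
T1 = np.eye(4); T1[0, 3] = 0.4      # 0.4 mm shift along x at a later visit
print(pose_offset(T0, T1))          # -> (0.4, 0.0)
```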

Citations: 0
HUMAP: Hierarchical Uniform Manifold Approximation and Projection.
Pub Date: 2024-09-30 DOI: 10.1109/TVCG.2024.3471181
Wilson E Marcilio-Jr, Danilo M Eler, Fernando V Paulovich, Rafael M Martins

Dimensionality reduction (DR) techniques help analysts understand patterns in high-dimensional spaces. These techniques, often presented as scatter plots, are employed in diverse science domains and facilitate similarity analysis among clusters and data samples. For datasets containing many granularities, or when analysis follows the information visualization mantra, hierarchical DR techniques are the most suitable approach since they present major structures first and details on demand. This work presents HUMAP, a novel hierarchical dimensionality reduction technique designed to be flexible in preserving local and global structures while maintaining the mental map throughout hierarchical exploration. We provide empirical evidence of our technique's superiority over current hierarchical approaches and present a case study applying HUMAP to dataset labelling.
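For intuition, here is a hedged sketch of hierarchical embedding in the spirit of HUMAP, built from off-the-shelf umap-learn and k-means landmarks rather than the authors' package: a coarse overview embedding over landmark points, then a detail embedding of a user-selected subset. The cluster count and dataset are arbitrary choices for the example.

```python
# Hedged sketch of overview-then-detail hierarchical DR (not the HUMAP code).
import umap                                  # pip install umap-learn
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits

X = load_digits().data

# Coarse level: cluster centers act as landmarks for an overview embedding.
kmeans = KMeans(n_clusters=20, n_init=10, random_state=0).fit(X)
overview = umap.UMAP(random_state=0).fit_transform(kmeans.cluster_centers_)

# Detail level: the user drills into one landmark; only its member points are
# re-embedded, keeping the overview as the mental map for the hierarchy.
selected = 0
members = X[kmeans.labels_ == selected]
detail = umap.UMAP(random_state=0).fit_transform(members)
print(overview.shape, detail.shape)          # (20, 2) and (n_members, 2)
```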

Citations: 0
Iguanodon: A Code-Breaking Game for Improving Visualization Construction Literacy.
Pub Date: 2024-09-27 DOI: 10.1109/TVCG.2024.3468948
Patrick Adelberger, Oleg Lesota, Klaus Eckelt, Markus Schedl, Marc Streit

In today's data-rich environment, visualization literacy, the ability to understand and communicate information through charts, is increasingly important. However, constructing effective charts can be challenging due to the numerous design choices involved. Off-the-shelf systems and libraries produce charts with carefully selected defaults that users may not be aware of, making it hard to increase their visualization literacy with those systems. In addition, traditional ways of improving visualization literacy, such as textbooks and tutorials, can be burdensome, as they require sifting through a plethora of resources. To address this challenge, we designed Iguanodon, an easy-to-use game application that complements traditional methods of improving visualization construction literacy. In our game application, users interactively choose whether to apply design choices, which we assign to sub-tasks that must be optimized to create an effective chart. The application offers multiple game variations to help users learn how different design choices should be applied to construct effective charts. Furthermore, our approach easily adapts to different visualization design guidelines. We describe the application's design and present the results of a user study with 37 participants. Our findings indicate that our game-based approach supports users in improving their visualization literacy.

Citations: 0
Field of View Restriction and Snap Turning as Cybersickness Mitigation Tools.
Pub Date: 2024-09-27 DOI: 10.1109/TVCG.2024.3470214
Jonathan W Kelly, Taylor A Doty, Stephen B Gilbert, Michael C Dorneich

Multiple tools are available to reduce cybersickness (sickness caused by virtual reality), but past research has not investigated the combined effects of multiple mitigation tools. Field of view (FOV) restriction limits peripheral vision during self-motion, and ample evidence supports its effectiveness at reducing cybersickness. Snap turning involves discrete rotations of the user's perspective without presenting intermediate views, although reports on its effectiveness at reducing cybersickness are limited and equivocal. Both mitigation tools reduce the visual motion that can cause cybersickness. The current study (N = 201) investigated the individual and combined effects of FOV restriction and snap turning on cybersickness when playing a consumer virtual reality game. FOV restriction and snap turning in isolation reduced cybersickness compared to a control condition without mitigation tools. Yet the combination of FOV restriction and snap turning did not reduce cybersickness further than either tool in isolation, and in some cases the combination led to cybersickness similar to that in the no-mitigation control. These results indicate that caution is warranted when combining multiple cybersickness mitigation tools, which can interact in unexpected ways.

Citations: 0
A Simulation-based Approach for Quantifying the Impact of Interactive Label Correction for Machine Learning.
Pub Date: 2024-09-26 DOI: 10.1109/TVCG.2024.3468352
Yixuan Wang, Jieqiong Zhao, Jiayi Hong, Ronald G Askin, Ross Maciejewski

Recent years have witnessed growing interest in understanding the sensitivity of machine learning to training data characteristics. While researchers have claimed benefits for human-in-the-loop activities such as interactive label correction in improving model performance, few studies have quantitatively probed the relationship between the cost of label correction and the associated benefit in model performance. We employ a simulation-based approach to explore the efficacy of label correction under diverse task conditions, namely different datasets, noise properties, and machine learning algorithms. We measure the impact of label correction on model performance under a best-case assumption of perfect correction (a perfect human and visual system), which serves as an upper-bound estimate of the benefits derived from visual interactive label correction. The simulation results reveal a trade-off between the label correction effort expended and the model performance improvement. Notably, task conditions play a crucial role in shaping this trade-off. Based on the simulation results, we develop a set of recommendations to help practitioners determine the conditions under which interactive label correction is an effective mechanism for improving model performance.
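The best-case (perfect-correction) setup lends itself to a small simulation. The sketch below is an assumed reconstruction of the general procedure, not the authors' code: inject uniform label noise into a synthetic dataset, let an oracle correct a growing fraction of the noisy labels, and trace test accuracy as correction effort grows; the noise rate, model, and dataset are illustrative choices.

```python
# Hedged sketch of a label-correction cost/benefit simulation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

noisy = y_tr.copy()
flip = rng.random(len(noisy)) < 0.3           # 30% uniform label noise
noisy[flip] ^= 1                               # binary labels: flip them

for frac in [0.0, 0.25, 0.5, 0.75, 1.0]:      # fraction of noise corrected
    labels = noisy.copy()
    wrong = np.flatnonzero(flip)
    fixed = wrong[: int(frac * len(wrong))]    # perfect (oracle) correction
    labels[fixed] = y_tr[fixed]
    acc = LogisticRegression(max_iter=1000).fit(X_tr, labels).score(X_te, y_te)
    print(f"corrected {frac:.0%} of noisy labels -> test accuracy {acc:.3f}")
```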

Citations: 0