
IEEE Transactions on Visualization and Computer Graphics: Latest Publications

2024 VGTC Visualization Lifetime Achievement Award
Pub Date : 2024-11-22 DOI: 10.1109/TVCG.2024.3473189
Hans-Christian Hege; Min Chen
The 2024 VGTC Visualization Lifetime Achievement Award goes to Hans-Christian Hege for his fundamental technical contributions to visualization and visualization software with a focus on applications in the natural sciences, medicine and engineering.
Visualization-Driven Illumination for Density Plots.
Pub Date : 2024-11-11 DOI: 10.1109/TVCG.2024.3495695
Xin Chen, Yunhai Wang, Huaiwei Bao, Kecheng Lu, Jaemin Jo, Chi-Wing Fu, Jean-Daniel Fekete

We present a novel visualization-driven illumination model for density plots, a new technique to enhance density plots by effectively revealing the detailed structures in high- and medium-density regions and outliers in low-density regions, while avoiding artifacts in the density field's colors. When visualizing large and dense discrete point samples, scatterplots and dot density maps often suffer from overplotting, and density plots are commonly employed to provide aggregated views while revealing underlying structures. Yet, in such density plots, existing illumination models may produce color distortion and hide details in low-density regions, making it challenging to look up density values, compare them, and find outliers. The key novelties in this work are (i) a visualization-driven illumination model that inherently supports density-plot-specific analysis tasks and (ii) a new image composition technique to reduce the interference between the image shading and the color-encoded density values. To demonstrate the effectiveness of our technique, we conducted a quantitative study, an empirical evaluation of our technique in a controlled study, and two case studies, exploring twelve datasets with up to two million data point samples.
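As a rough illustration of what illuminating a density plot involves, the following Python sketch treats a smoothed 2D density field as a height field, shades it with a simple Lambertian term, and modulates only the luminance of the color-encoded density so that hue still carries the density value. This is a generic shading pipeline, not the authors' visualization-driven model; all parameters (bin count, smoothing sigma, light direction, luminance range) are illustrative assumptions.

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(0)
    pts = rng.normal(size=(100_000, 2))                   # dense 2D point sample
    H, xe, ye = np.histogram2d(pts[:, 0], pts[:, 1], bins=512)
    density = gaussian_filter(H, sigma=2.0)               # smoothed density field

    # Treat the normalized density as a height field and shade it.
    gx, gy = np.gradient(density / density.max())         # gradients along the grid axes
    normals = np.dstack([-gx, -gy, np.ones_like(density)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    light = np.array([0.5, 0.5, 1.0])
    light /= np.linalg.norm(light)
    shade = np.clip(normals @ light, 0.0, 1.0)            # Lambertian term

    # Composite: perturb only luminance, keeping hue as the density encoding.
    rgb = plt.cm.viridis(density / density.max())[..., :3]
    lit = rgb * (0.6 + 0.4 * shade[..., None])
    plt.imshow(lit.transpose(1, 0, 2), origin="lower")
    plt.show()

A naive multiplicative composite like the last step is exactly where color distortion can creep in; the paper's contribution is a composition that reduces this interference between shading and the color-encoded values.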

Investigating the Potential of Haptic Props for 3D Object Manipulation in Handheld AR.
Pub Date : 2024-11-11 DOI: 10.1109/TVCG.2024.3495021
Jonathan Wieland, Maximilian Durr, Rebecca Frisch, Melissa Michalke, Dominik Morgenstern, Harald Reiterer, Tiare Feuchtner

The manipulation of virtual 3D objects is essential for a variety of handheld AR scenarios. However, the mapping of commonly supported 2D touch gestures to manipulations in 3D space is not trivial. As an alternative, our work explores the use of haptic props that facilitate direct manipulation of virtual 3D objects with 6 degrees of freedom. In an experiment, we instructed 20 participants to solve 2D and 3D docking tasks in AR, to compare traditional 2D touch gestures with prop-based interactions using three prop shapes (cube, rhombicuboctahedron, sphere). Our findings highlight benefits of haptic props for 3D manipulation tasks with respect to task performance, user experience, preference, and workload. For 2D tasks, the benefits of haptic props are less pronounced. Finally, while we found no significant impact of prop shape on task performance, this appears to be subject to personal preference.

"where Did My Apps Go?" Supporting Scalable and Transition-Aware Access to Everyday Applications in Head-Worn Augmented Reality. "我的应用程序去哪儿了?在头戴式增强现实中支持可扩展和过渡感知的日常应用访问。
Pub Date : 2024-11-08 DOI: 10.1109/TVCG.2024.3493115
Feiyu Lu, Leonardo Pavanatto, Shakiba Davari, Lei Zhang, Lee Lisle, Doug A Bowman

Future augmented reality (AR) glasses empower users to view personal applications and services anytime and anywhere, without being restricted by physical locations or the availability of physical screens. In typical everyday activities, people move around to carry out different tasks and need a variety of information on the go. Existing AR interfaces do not support these use cases well, especially as the number of applications increases. We explore the usability of three world-referenced approaches that move AR applications with users as they transition among different locations, featuring different levels of AR app availability: (1) always using a menu to manually open an app when needed; (2) automatically suggesting a relevant subset of all apps; and (3) carrying all apps with the users to the new location. Through a controlled study and a relatively more ecologically valid study in AR, we reached a better understanding of the performance trade-offs and observed the impact of various everyday contextual factors on these interfaces in more realistic AR settings. Our results shed light on how future AR interfaces can better support users' mobile information needs in everyday life.

PGSR: Planar-based Gaussian Splatting for Efficient and High-Fidelity Surface Reconstruction.
Pub Date : 2024-11-07 DOI: 10.1109/TVCG.2024.3494046
Danpeng Chen, Hai Li, Weicai Ye, Yifan Wang, Weijian Xie, Shangjin Zhai, Nan Wang, Haomin Liu, Hujun Bao, Guofeng Zhang

Recently, 3D Gaussian Splatting (3DGS) has attracted widespread attention due to its high-quality rendering, and ultra-fast training and rendering speed. However, due to the unstructured and irregular nature of Gaussian point clouds, it is difficult to guarantee geometric reconstruction accuracy and multi-view consistency simply by relying on image reconstruction loss. Although many studies on surface reconstruction based on 3DGS have emerged recently, the quality of their meshes is generally unsatisfactory. To address this problem, we propose a fast planar-based Gaussian splatting reconstruction representation (PGSR) to achieve high-fidelity surface reconstruction while ensuring high-quality rendering. Specifically, we first introduce an unbiased depth rendering method, which directly renders the distance from the camera origin to the Gaussian plane and the corresponding normal map based on the Gaussian distribution of the point cloud, and divides the two to obtain the unbiased depth. We then introduce single-view geometric, multi-view photometric, and geometric regularization to preserve global geometric accuracy. We also propose a camera exposure compensation model to cope with scenes with large illumination variations. Experiments on indoor and outdoor scenes show that the proposed method achieves fast training and rendering while maintaining high-fidelity rendering and geometric reconstruction, outperforming 3DGS-based and NeRF-based methods. Our code will be made publicly available, and more information can be found on our project page (https://zju3dv.github.io/pgsr/).
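The "divides the two" step has a direct geometric reading: for a pixel ray with unit direction r from the camera origin and a Gaussian compressed onto a plane with unit normal n at distance d from the origin, the ray hits the plane at depth t = d / (n · r). Below is a minimal NumPy sketch of that per-pixel division, with hypothetical array names; in the actual method the rendered distance and normal maps would come from the splatting pass.

    import numpy as np

    def unbiased_depth(plane_dist, normal_map, ray_dirs):
        # Ray-plane intersection: for a plane n . x = d and a ray x = t * r
        # from the camera origin, the hit is at t = d / (n . r).
        ndotr = np.sum(normal_map * ray_dirs, axis=-1)
        return plane_dist / np.clip(np.abs(ndotr), 1e-6, None)  # guard grazing rays

    # Toy check: one pixel looking down -z at a plane z = -2 facing the camera.
    r = np.array([[[0.0, 0.0, -1.0]]])   # unit ray direction
    n = np.array([[[0.0, 0.0, -1.0]]])   # plane normal
    d = np.array([[2.0]])                # origin-to-plane distance
    print(unbiased_depth(d, n, r))       # -> [[2.]]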

From Dashboard Zoo to Census: A Case Study With Tableau Public.
Pub Date : 2024-11-06 DOI: 10.1109/TVCG.2024.3490259
Arjun Srinivasan, Joanna Purich, Michael Correll, Leilani Battle, Vidya Setlur, Anamaria Crisan

Dashboards remain ubiquitous tools for analyzing data and disseminating the findings. Understanding the range of dashboard designs, from simple to complex, can support the development of authoring tools that enable end-users to meet their analysis and communication goals. Yet, there has been little work that provides a quantifiable, systematic, and descriptive overview of dashboard design patterns. Instead, existing approaches only consider a handful of designs, which limits the breadth of patterns that can be surfaced. More quantifiable approaches, inspired by machine learning (ML), are presently limited to single visualizations or capture narrow features of dashboard designs. To address this gap, we present an approach for modeling the content and composition of dashboards using a graph representation. The graph decomposes dashboard designs into nodes featuring content "blocks" and uses edges to model "relationships", such as layout proximity and interaction, between nodes. To demonstrate the utility of this approach, and its extension over prior work, we apply this representation to derive a census of 25,620 dashboards from Tableau Public, providing a descriptive overview of the core building blocks of dashboards in the wild and summarizing prevalent dashboard design patterns. We discuss concrete applications of both a graph representation for dashboard designs and the resulting census to guide the development of dashboard authoring tools, make dashboards accessible, and leverage AI/ML techniques. Our findings underscore the importance of meeting users where they are by broadly cataloging dashboard designs, both common and exotic.
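To make the representation concrete, here is a small Python sketch of one dashboard as such a graph, using networkx; the node and edge attribute names are illustrative, not the paper's actual schema.

    import networkx as nx

    # Nodes are content "blocks"; edges are "relationships" between them.
    g = nx.Graph(name="example-dashboard")
    g.add_node("bar_chart", kind="view", encoding="bar")
    g.add_node("map", kind="view", encoding="geo")
    g.add_node("year_filter", kind="widget", control="dropdown")
    g.add_edge("bar_chart", "map", relation="layout-proximity")
    g.add_edge("year_filter", "bar_chart", relation="interaction")
    g.add_edge("year_filter", "map", relation="interaction")

    # Census-style questions become graph queries, e.g. how many blocks
    # does each interactive widget connect to?
    fanout = {n: g.degree(n) for n, a in g.nodes(data=True) if a["kind"] == "widget"}
    print(fanout)   # {'year_filter': 2}

Scaling such queries over tens of thousands of dashboard graphs is what turns a "zoo" of individual designs into a census of design patterns.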

Authoring Data-Driven Chart Animations.
Pub Date : 2024-11-05 DOI: 10.1109/TVCG.2024.3491504
Yuancheng Shen, Yue Zhao, Yunhai Wang, Tong Ge, Haoyan Shi, Bongshin Lee

We present an authoring tool, called CAST+ (Canis Studio Plus), that enables the interactive creation of chart animations through the direct manipulation of keyframes. It introduces the visual specification of chart animations consisting of keyframes that can be played sequentially or simultaneously, and animation parameters (e.g., duration, delay). Building on Canis [1], a declarative chart animation grammar that leverages data-enriched SVG charts, CAST+ supports auto-completion for constructing both keyframes and keyframe sequences. It also enables users to refine the animation specification (e.g., aligning keyframes across tracks to play them together, adjusting delay) with direct manipulation. We report a user study conducted to assess the visual specification and system usability with its initial version. We enhanced the system's expressiveness and usability: CAST+ now supports the animation of multiple types of visual marks in the same keyframe group with new auto-completion algorithms based on generalized selection. This enables the creation of more expressive animations, while reducing the number of interactions needed to create comparable animations. We present a gallery of examples and four usage scenarios to demonstrate the expressiveness of CAST+. Finally, we discuss the limitations, comparison, and potentials of CAST+ as well as directions for future research.
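The flavor of such a keyframe specification can be sketched as plain data plus a timing rule; the field names in this Python sketch are illustrative, not the actual Canis/CAST+ grammar.

    # A toy keyframe spec: keyframes play sequentially or simultaneously,
    # each with a duration and an optional delay.
    spec = {
        "chart": "bars.svg",                  # a data-enriched SVG chart
        "animation": {
            "play": "sequential",             # or "simultaneous"
            "keyframes": [
                {"select": "bars[year=2020]", "effect": "grow", "duration": 500},
                {"select": "bars[year=2021]", "effect": "grow", "duration": 500,
                 "delay": 200},
            ],
        },
    }

    def total_duration(anim):
        # Sequential tracks add up; simultaneous tracks overlap.
        spans = [k.get("delay", 0) + k["duration"] for k in anim["keyframes"]]
        return sum(spans) if anim["play"] == "sequential" else max(spans)

    print(total_duration(spec["animation"]))  # -> 1200

Direct manipulation in the tool then amounts to editing this structure: dragging a keyframe across tracks rewrites its grouping, and adjusting a delay rewrites the corresponding parameter.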

Iceberg Sensemaking: A Process Model for Critical Data Analysis.
Pub Date : 2024-11-04 DOI: 10.1109/TVCG.2024.3486613
Charles Berret, Tamara Munzner

We offer a new model of the sensemaking process for data analysis and visualization. Whereas past sensemaking models have been grounded in positivist assumptions about the nature of knowledge, we reframe data sensemaking in critical, humanistic terms by approaching it through an interpretivist lens. Our three-phase process model uses the analogy of an iceberg, where data is the visible tip of underlying schemas. In the Add phase, the analyst acquires data, incorporates explicit schemas from the data, and absorbs the tacit schemas of both data and people. In the Check phase, the analyst interprets the data with respect to the current schemas and evaluates whether the schemas match the data. In the Refine phase, the analyst considers the role of power, articulates what was tacit into explicitly stated schemas, updates data, and formulates findings. Our model has four important distinguishing features: Tacit and Explicit Schemas, Schemas First and Always, Data as a Schematic Artifact, and Schematic Multiplicity. We compare the roles of schemas in past sensemaking models and draw conceptual distinctions based on a historical review of schemas in different academic traditions. We validate the descriptive and prescriptive power of our model through four analysis scenarios: noticing uncollected data, learning to wrangle data, downplaying inconvenient data, and measuring with sensors. We conclude by discussing the value of interpretivism, the virtue of epistemic humility, and the pluralism this sensemaking model can foster.

Super-NeRF: View-consistent Detail Generation for NeRF Super-resolution.
Pub Date : 2024-11-04 DOI: 10.1109/TVCG.2024.3490840
Yuqi Han, Tao Yu, Xiaohang Yu, Di Xu, Binge Zheng, Zonghong Dai, Changpeng Yang, Yuwang Wang, Qionghai Dai

The neural radiance field (NeRF) has achieved remarkable success in modeling 3D scenes and synthesizing high-fidelity novel views. However, existing NeRF-based methods focus on making full use of high-resolution images to generate high-resolution novel views, and give less consideration to generating high-resolution details when only low-resolution images are available. By analogy to the extensive use of image super-resolution, NeRF super-resolution is an effective way to generate low-resolution-guided high-resolution 3D scenes and holds great potential for applications. To date, this important topic remains under-explored. In this paper, we propose a NeRF super-resolution method, named Super-NeRF, to generate high-resolution NeRF from only low-resolution inputs. Given multi-view low-resolution images, Super-NeRF constructs a multi-view consistency-controlling super-resolution module to generate view-consistent high-resolution details for NeRF. Specifically, an optimizable latent code is introduced for each input view to control the super-resolution module so that the generated high-resolution 2D images satisfy view consistency. The latent codes of each low-resolution image are optimized synergistically with the target Super-NeRF representation to exploit the view-consistency constraint inherent in NeRF construction. We verify the effectiveness of Super-NeRF on synthetic, real-world, and even AI-generated NeRFs. Super-NeRF achieves state-of-the-art NeRF super-resolution performance in high-resolution detail generation and cross-view consistency.
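A minimal PyTorch sketch of the per-view latent-code idea: each input view owns a learnable code that conditions a super-resolution module, and the codes and the module are optimized together. The module below is a stand-in (it does not even change resolution), and the loss is a placeholder where a NeRF-rendered view-consistency term would go; nothing here reflects the paper's actual architecture.

    import torch
    import torch.nn as nn

    num_views, code_dim = 8, 32
    codes = nn.Parameter(torch.zeros(num_views, code_dim))  # one code per view

    sr_module = nn.Sequential(                 # placeholder SR network
        nn.Conv2d(3 + code_dim, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 3, 3, padding=1),
    )
    opt = torch.optim.Adam([codes, *sr_module.parameters()], lr=1e-3)

    lr_views = torch.rand(num_views, 3, 64, 64)   # toy low-res inputs
    for step in range(10):
        i = step % num_views
        cond = codes[i].view(1, -1, 1, 1).expand(1, code_dim, 64, 64)
        out = sr_module(torch.cat([lr_views[i:i+1], cond], dim=1))
        # Placeholder objective; the real method couples this with a
        # view-consistency constraint from the NeRF being optimized.
        loss = (out - lr_views[i:i+1]).abs().mean()
        opt.zero_grad(); loss.backward(); opt.step()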

CATOM: Causal Topology Map for Spatiotemporal Traffic Analysis with Granger Causality in Urban Areas.
Pub Date : 2024-10-31 DOI: 10.1109/TVCG.2024.3489676
Chanyoung Jung, Soobin Yim, Giwoong Park, Simon Oh, Yun Jang

The transportation network is an important element of an urban system: it supports daily activities by enabling people to travel from one place to another. One key challenge is the complexity of this network, which is composed of many node pairs distributed over the area. This spatial characteristic turns understanding the 'cause' of problems such as traffic congestion into a high-dimensional network problem. Recent studies have proposed visual analytics systems aimed at understanding these underlying causes. Despite these efforts, the analysis of such causes remains limited to already-identified patterns. However, given the intricate distribution of roads and their mutual influence, new patterns continuously emerge across all roads within urban transportation. At this stage, a well-defined visual analytics system can be a good solution for transportation practitioners. In this paper, we propose CATOM (Causal Topology Map), a system for cause-effect analysis of traffic patterns that extracts causal topology maps based on Granger causality. CATOM discovers causal relationships between roads through the Granger causality test and quantifies these relationships through the causal density. During the design process, the system was developed to fully utilize spatial information with visualization techniques and to overcome problems reported in the literature. We also evaluate the usability of our approach by conducting a SUS (System Usability Scale) test and a traffic-cause analysis with real-world data from two study sites, in collaboration with domain experts.
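The pairwise building block, the Granger causality test, is easy to demonstrate on two synthetic road-speed series using statsmodels; the road names, lag, and noise levels are illustrative assumptions.

    import numpy as np
    from statsmodels.tsa.stattools import grangercausalitytests

    rng = np.random.default_rng(1)
    speed_a = rng.normal(50, 5, 500)                        # speeds on road A
    speed_b = np.roll(speed_a, 3) + rng.normal(0, 1, 500)   # road B lags A by 3 steps

    # Column order matters: the test asks whether column 2 Granger-causes column 1.
    data = np.column_stack([speed_b, speed_a])
    res = grangercausalitytests(data, maxlag=5, verbose=False)
    p = res[3][0]["ssr_ftest"][1]                           # p-value at lag 3
    print(f"p-value at lag 3: {p:.3g}")                     # small p: A Granger-causes B

Running such tests over all road pairs and aggregating the significant links is what yields the causal topology map, with causal density quantifying the strength of the discovered relationships.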
