
Latest publications from 2022 IEEE Visualization and Visual Analytics (VIS)

ARShopping: In-Store Shopping Decision Support Through Augmented Reality and Immersive Visualization
Pub Date: 2022-07-15 DOI: 10.1109/VIS54862.2022.00033
Bingjie Xu, Shunan Guo, E. Koh, J. Hoffswell, R. Rossi, F. Du
Online shopping gives customers boundless options to choose from, backed by extensive product details and customer reviews, all from the comfort of home; yet, no amount of detailed, online information can outweigh the instant gratification and hands-on understanding of a product that is provided by physical stores. However, making purchasing decisions in physical stores can be challenging due to a large number of similar alternatives and limited accessibility of the relevant product information (e.g., features, ratings, and reviews). In this work, we present ARShopping: a web-based prototype to visually communicate detailed product information from an online setting on portable smart devices (e.g., phones, tablets, glasses), within the physical space at the point of purchase. This prototype uses augmented reality (AR) to identify products and display detailed information to help consumers make purchasing decisions that fulfill their needs while decreasing the decision-making time. In particular, we use a data fusion algorithm to improve the precision of the product detection; we then integrate AR visualizations into the scene to facilitate comparisons across multiple products and features. We designed our prototype based on interviews with 14 participants to better understand the utility and ease of use of the prototype.
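The data fusion step mentioned above can be pictured as score-level fusion of independent detectors; a minimal sketch, assuming a weighted average and hypothetical product IDs (the paper does not specify its algorithm):

```python
def fuse_detections(scores_a, scores_b, w=0.5):
    """Combine two detectors' per-product confidence scores by a weighted
    average (illustrative only; not the paper's actual fusion algorithm)."""
    products = set(scores_a) | set(scores_b)
    return {p: w * scores_a.get(p, 0.0) + (1.0 - w) * scores_b.get(p, 0.0)
            for p in products}
```

A product missing from one detector simply contributes a zero score from that source.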
Citations: 1
LineCap: Line Charts for Data Visualization Captioning Models
Pub Date: 2022-07-15 DOI: 10.1109/VIS54862.2022.00016
Anita Mahinpei, Zona Kostic, Christy Tanner
Data visualization captions help readers understand the purpose of a visualization and are crucial for individuals with visual impairments. The prevalence of poor figure captions and the successful application of deep learning approaches to image captioning motivate the use of similar techniques for automated figure captioning. However, research in this field has been stunted by the lack of suitable datasets. We introduce LineCap, a novel figure captioning dataset of 3,528 figures, and we provide insights from curating this dataset and using end-to-end deep learning models for automated figure captioning.
Citations: 6
FairFuse: Interactive Visual Support for Fair Consensus Ranking
Pub Date: 2022-07-15 DOI: 10.1109/VIS54862.2022.00022
Hilson Shrestha, Kathleen Cachel, Mallak Alkhathlan, Elke A. Rundensteiner, Lane Harrison
Fair consensus building combines the preferences of multiple rankers into a single consensus ranking, while ensuring any group defined by a protected attribute (such as race or gender) is not disadvantaged compared to other groups. Manually generating a fair consensus ranking is time-consuming and impractical, even for a fairly small number of candidates. While algorithmic approaches for auditing and generating fair consensus rankings have been developed, these have not been operationalized in interactive systems. To bridge this gap, we introduce FairFuse, a visualization system for generating, analyzing, and auditing fair consensus rankings. We construct a data model which includes base rankings entered by rankers, augmented with measures of group fairness, and algorithms for generating consensus rankings with varying degrees of fairness. We design novel visualizations that encode these measures in a parallel-coordinates style rank visualization, with interactions for generating and exploring fair consensus rankings. We describe use cases in which FairFuse supports a decision-maker in ranking scenarios in which fairness is important, and discuss emerging challenges for future efforts supporting fairness-oriented rank analysis. Code and demo videos available at https://osf.io/hd639/.
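For intuition, consensus building and a group-fairness audit can be sketched with a Borda count and a top-k parity measure; both are standard illustrative choices, not necessarily the algorithms or measures FairFuse implements:

```python
def borda_consensus(rankings):
    """Combine base rankings into one consensus ranking via Borda count:
    a candidate at position i of an n-item ranking earns n - i points."""
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for pos, candidate in enumerate(ranking):
            scores[candidate] = scores.get(candidate, 0) + (n - pos)
    return sorted(scores, key=scores.get, reverse=True)

def topk_parity(ranking, protected, k):
    """Share of protected-group candidates in the top k of a ranking,
    a basic group-fairness audit of the consensus."""
    return sum(1 for c in ranking[:k] if c in protected) / k
```

An interactive system would recompute such measures as the user adjusts the fairness/consensus trade-off.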
Citations: 1
Efficient Interpolation-based Pathline Tracing with B-spline Curves in Particle Dataset
Pub Date: 2022-07-14 DOI: 10.1109/VIS54862.2022.00037
Haoyu Li, Tianyu Xiong, Han-Wei Shen
Particle tracing through numerical integration is a well-known approach to generating pathlines for visualization. However, for particle simulations, the computation of pathlines is expensive, since the interpolation method is complicated due to the lack of connectivity information. Previous studies utilize the k-d tree to reduce the time for neighborhood search. However, the efficiency is still limited by the number of tracing time steps. Therefore, we propose a novel interpolation-based particle tracing method that first represents particle data as B-spline curves and interpolates B-spline control points to reduce the number of interpolation time steps. We demonstrate our approach achieves good tracing accuracy with much less computation time.
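Evaluating a pathline stored as a B-spline reduces to De Boor's algorithm; a generic, stdlib-only sketch (the knot vectors and control points in the test are illustrative, and the authors' implementation details may differ):

```python
def de_boor(x, knots, ctrl, p):
    """Evaluate a degree-p B-spline curve at parameter x via De Boor's
    algorithm. knots has length len(ctrl) + p + 1; ctrl holds scalar
    control points (apply per coordinate for 2D/3D pathlines)."""
    # Locate the knot span k with knots[k] <= x < knots[k + 1].
    k = p
    while k < len(ctrl) - 1 and not (knots[k] <= x < knots[k + 1]):
        k += 1
    # Repeated convex combinations of the p + 1 relevant control points.
    d = [ctrl[j + k - p] for j in range(p + 1)]
    for r in range(1, p + 1):
        for j in range(p, r - 1, -1):
            alpha = (x - knots[j + k - p]) / (knots[j + 1 + k - r] - knots[j + k - p])
            d[j] = (1.0 - alpha) * d[j - 1] + alpha * d[j]
    return d[p]
```

Once a trajectory is fit as a spline, evaluating it at any time replaces repeated neighborhood searches in the raw particle data.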
Citations: 1
Visualizing Confidence Intervals for Critical Point Probabilities in 2D Scalar Field Ensembles
Pub Date: 2022-07-13 DOI: 10.1109/VIS54862.2022.00038
Dominik Vietinghoff, M. Böttinger, G. Scheuermann, Christian Heine
An important task in visualization is the extraction and highlighting of dominant features in data to support users in their analysis process. Topological methods are a well-known means of identifying such features in deterministic fields. However, many real-world phenomena studied today are the result of a chaotic system that cannot be fully described by a single simulation. Instead, the variability of such systems is usually captured with ensemble simulations that produce a variety of possible outcomes of the simulated process. The topological analysis of such ensemble data sets and uncertain data, in general, is less well studied. In this work, we present an approach for the computation and visual representation of confidence intervals for the occurrence probabilities of critical points in ensemble data sets. We demonstrate the added value of our approach over existing methods for critical point prediction in uncertain data on a synthetic data set and show its applicability to a data set from climate research.
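A confidence interval for a critical point's occurrence probability estimated from n ensemble members is a binomial-proportion interval; a minimal sketch using the Wilson score interval, one standard construction (not necessarily the one used in the paper):

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (95% by default).
    Here successes = ensemble members in which the critical point occurs."""
    p = successes / n
    denom = 1.0 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half
```

The interval width shrinks with ensemble size, which is exactly the uncertainty such visualizations need to convey.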
Citations: 2
Color Coding of Large Value Ranges Applied to Meteorological Data
Pub Date: 2022-07-13 DOI: 10.1109/VIS54862.2022.00034
Daniel Braun, K. Ebell, V. Schemann, L. Pelchmann, S. Crewell, R. Borgo, T. V. Landesberger
This paper presents a novel color scheme designed to address the challenge of visualizing data series with large value ranges, where scale transformation provides limited support. We focus on meteorological data, where the presence of large value ranges is common. We apply our approach to meteorological scatterplots, one of the most common plots used in this domain. Our approach leverages the numerical representation of the mantissa and exponent of the values to guide the design of novel “nested” color schemes, able to emphasize differences between magnitudes. Our user study evaluates the new designs against state-of-the-art color scales and representative color schemes used in the analysis of meteorological data: ColorCrafter, Viridis, and Rainbow. We assess accuracy, time, and confidence in the context of discrimination (comparison) and interpretation (reading) tasks. Our proposed color scheme significantly outperforms the others in interpretation tasks, while showing comparable performance in discrimination tasks.
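The mantissa/exponent idea can be sketched as follows: the decimal exponent selects a hue band and the mantissa sets lightness within that band (the function name, palette, and lightness ramp are assumptions, not the paper's actual design):

```python
import math

def nested_color(value, exp_hues):
    """Map a positive value to a (hue, lightness) pair: the decimal exponent
    picks a hue band (one per order of magnitude) and the mantissa in [1, 10)
    sets lightness within the band. Illustrative sketch only."""
    exponent = math.floor(math.log10(value))
    mantissa = value / 10 ** exponent          # in [1, 10)
    hue = exp_hues[exponent]                    # hypothetical per-exponent palette
    lightness = 0.85 - 0.5 * (mantissa - 1.0) / 9.0  # darker toward the band's top
    return hue, lightness
```

Because the hue changes only at order-of-magnitude boundaries, differences between magnitudes stay legible even when values span many decades.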
Citations: 1
ASEVis: Visual Exploration of Active System Ensembles to Define Characteristic Measures
Pub Date: 2022-07-13 DOI: 10.1109/VIS54862.2022.00039
Marina Evers, R. Wittkowski, L. Linsen
Simulation ensembles are a common tool in physics for understanding how a model outcome depends on input parameters. We analyze an active particle system, where each particle can use energy from its surroundings to propel itself. A multi-dimensional feature vector containing all particles' motion information can describe the whole system at each time step. The system's behavior strongly depends on input parameters like the propulsion mechanism of the particles. To understand how the time-varying behavior depends on the input parameters, it is necessary to introduce new measures to quantify the difference of the dynamics of the ensemble members. We propose a tool that supports the interactive visual analysis of time-varying feature-vector ensembles. A core component of our tool allows for the interactive definition and refinement of new measures that can then be used to understand the system's behavior and compare the ensemble members. Different visualizations support the user in finding a characteristic measure for the system. By visualizing the user-defined measure, the user can then investigate the parameter dependencies and gain insights into the relationship between input parameters and simulation output.
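A user-defined characteristic measure could be as simple as the mean distance between two members' feature vectors over time; an illustrative sketch (not a measure from ASEVis itself):

```python
import math

def ensemble_distance(member_a, member_b):
    """Mean Euclidean distance between two ensemble members' feature
    vectors, averaged over time steps. members: lists of equal-length
    feature vectors, one vector per time step. Illustrative only."""
    total = 0.0
    for va, vb in zip(member_a, member_b):
        total += math.sqrt(sum((x - y) ** 2 for x, y in zip(va, vb)))
    return total / len(member_a)
```

In a tool like the one described, such a measure would be defined interactively and recomputed as the user refines it.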
Citations: 0
Streamlining Visualization Authoring in D3 Through User-Driven Templates
Pub Date: 2022-07-13 DOI: 10.1109/VIS54862.2022.00012
Hannah K. Bako, Alisha Varma, Anuoluwapo Faboro, Mahreen Haider, Favour Nerrise, B. Kenah, L. Battle
D3 is arguably the most popular tool for implementing web-based visualizations. Yet D3 has a steep learning curve that may hinder its adoption and continued use. To simplify the process of programming D3 visualizations, we must first understand the space of implementation practices that D3 users engage in. We present a qualitative analysis of 2500 D3 visualizations and their corresponding implementations. We find that 5 visualization types (Bar Charts, Geomaps, Line Charts, Scatterplots, and Force Directed Graphs) account for 80% of D3 visualizations found in our corpus. While implementation styles vary slightly across designs, the underlying code structure for all visualization types remains the same; presenting an opportunity for code reuse. Using our corpus of D3 examples, we synthesize reusable code templates for eight popular D3 visualization types and share them in our open source repository. Based on our results, we discuss design considerations for leveraging users' implementation patterns to reduce visualization design effort through design templates and auto-generated code recommendations.
Citations: 4
Who benefits from Visualization Adaptations? Towards a better Understanding of the Influence of Visualization Literacy
Pub Date: 2022-07-12 DOI: 10.1109/VIS54862.2022.00027
Marc Satkowski, F. Kessler, S. Narciss, Raimund Dachselt
The ability to read, understand, and comprehend visual information representations is subsumed under the term visualization literacy (VL). One possibility to improve the use of information visualizations is to introduce adaptations. However, it is yet unclear whether people with different VL benefit from adaptations to the same degree. We conducted an online experiment (n = 42) to investigate whether the effect of an adaptation (here: De-Emphasis) of visualizations (bar charts, scatter plots) on performance (accuracy, time) and user experiences depends on users' VL level. Using linear mixed models for the analyses, we found a positive impact of the De-Emphasis adaptation across all conditions, as well as an interaction effect of adaptation and VL on the task completion time for bar charts. This work contributes to a better understanding of the intertwined relationship of VL and visual adaptations and motivates future research.
Citations: 1
Facilitating Conversational Interaction in Natural Language Interfaces for Visualization
Pub Date: 2022-07-01 DOI: 10.1109/VIS54862.2022.00010
Rishab Mitra, Arpit Narechania, A. Endert, J. Stasko
Natural language (NL) toolkits enable visualization developers, who may not have a background in natural language processing (NLP), to create natural language interfaces (NLIs) for end-users to flexibly specify and interact with visualizations. However, these toolkits currently only support one-off utterances, with minimal capability to facilitate a multi-turn dialog between the user and the system. Developing NLIs with such conversational interaction capabilities remains a challenging task, requiring implementations of low-level NLP techniques to process a new query as an intent to follow-up on an older query. We extend an existing Python-based toolkit, NL4DV, that processes an NL query about a tabular dataset and returns an analytic specification containing data attributes, analytic tasks, and relevant visualizations, modeled as a JSON object. Specifically, NL4DV now enables developers to facilitate multiple simultaneous conversations about a dataset and resolve associated ambiguities, augmenting new conversational information into the output JSON object. We demonstrate these capabilities through three examples: (1) an NLI to learn aspects of the Vega-Lite grammar, (2) a mind mapping application to create free-flowing conversations, and (3) a chatbot to answer questions and resolve ambiguities.
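The multi-turn behavior described above can be pictured as merging partial analytic specifications across turns; a toy sketch in which the field names, merge rule, and attribute values are assumptions (NL4DV's real JSON object and ambiguity handling are richer):

```python
def merge_followup(prev_spec, followup_spec):
    """Toy follow-up resolution: fields left empty by the follow-up query
    are inherited from the previous turn's analytic specification.
    (Hypothetical structure, not NL4DV's actual output schema.)"""
    merged = dict(prev_spec)
    for key, value in followup_spec.items():
        if value:  # non-empty fields from the follow-up take precedence
            merged[key] = value
    return merged
```

For example, a follow-up like "now only the comedies" would carry a new task but inherit the attributes already under discussion.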
Citations: 8
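The NL4DV abstract above describes an analytic specification returned as a JSON object containing data attributes, analytic tasks, and candidate visualizations. A minimal sketch of consuming such a response is shown below. The mock response is hand-written for the example: the top-level keys (attributeMap, taskMap, visList, vlSpec) follow NL4DV's documented output format, but the exact field contents here are illustrative assumptions, not output from the toolkit.

```python
# Parse a mock NL4DV-style analytic specification (e.g. for the query
# "show the relationship between horsepower and mpg") and summarize it.
import json

mock_response = json.loads("""
{
  "attributeMap": {
    "Horsepower": {"queryPhrase": ["horsepower"], "inferenceType": "explicit"}
  },
  "taskMap": {
    "correlation": [{"inferenceType": "implicit",
                     "attributes": ["Horsepower", "MPG"]}]
  },
  "visList": [
    {"visType": "scatterplot",
     "attributes": ["Horsepower", "MPG"],
     "tasks": ["correlation"],
     "vlSpec": {"mark": "point",
                "encoding": {"x": {"field": "Horsepower", "type": "quantitative"},
                             "y": {"field": "MPG", "type": "quantitative"}}}}
  ]
}
""")

def summarize(response):
    """Pull out detected attributes, analytic tasks, and suggested chart types."""
    attrs = sorted(response["attributeMap"])
    tasks = sorted(response["taskMap"])
    charts = [v["visType"] for v in response["visList"]]
    return attrs, tasks, charts

attrs, tasks, charts = summarize(mock_response)
print(attrs, tasks, charts)  # -> ['Horsepower'] ['correlation'] ['scatterplot']
```

Each visList entry carries a Vega-Lite spec (vlSpec), which is what makes the first demo in the abstract — an NLI for learning the Vega-Lite grammar — straightforward to build on top of the toolkit's output.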
2022 IEEE Visualization and Visual Analytics (VIS)