ARShopping: In-Store Shopping Decision Support Through Augmented Reality and Immersive Visualization
Pub Date: 2022-07-15 · DOI: 10.1109/VIS54862.2022.00033
Bingjie Xu, Shunan Guo, E. Koh, J. Hoffswell, R. Rossi, F. Du
Online shopping gives customers boundless options to choose from, backed by extensive product details and customer reviews, all from the comfort of home; yet, no amount of detailed, online information can outweigh the instant gratification and hands-on understanding of a product that is provided by physical stores. However, making purchasing decisions in physical stores can be challenging due to a large number of similar alternatives and limited accessibility of the relevant product information (e.g., features, ratings, and reviews). In this work, we present ARShopping: a web-based prototype to visually communicate detailed product information from an online setting on portable smart devices (e.g., phones, tablets, glasses), within the physical space at the point of purchase. This prototype uses augmented reality (AR) to identify products and display detailed information to help consumers make purchasing decisions that fulfill their needs while reducing decision-making time. In particular, we use a data fusion algorithm to improve the precision of product detection; we then integrate AR visualizations into the scene to facilitate comparisons across multiple products and features. We designed our prototype based on interviews with 14 participants to better understand its utility and ease of use.
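The abstract does not spell out the fusion algorithm. As a generic illustration of how multiple noisy detections of the same product might be combined, here is a minimal log-odds (naive Bayes) fusion sketch; the function name and the assumption of independent sources are ours, not the paper's.

```python
import math

def fuse_detections(confidences, prior=0.5):
    """Naive Bayesian (log-odds) fusion of independent detector confidences.

    `confidences` holds per-source probabilities that the same product was
    detected (e.g., from consecutive camera frames or different sensors).
    Returns a single fused probability.
    """
    # Start from the prior's log-odds, then add each source's evidence.
    prior_logit = math.log(prior / (1 - prior))
    logit = prior_logit
    for p in confidences:
        p = min(max(p, 1e-6), 1 - 1e-6)  # clamp to avoid infinities
        logit += math.log(p / (1 - p)) - prior_logit
    return 1 / (1 + math.exp(-logit))

# Three noisy observations of the same product on a shelf.
print(round(fuse_detections([0.7, 0.8, 0.65]), 3))  # ~0.945, above any single source
```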
{"title":"ARShopping: In-Store Shopping Decision Support Through Augmented Reality and Immersive Visualization","authors":"Bingjie Xu, Shunan Guo, E. Koh, J. Hoffswell, R. Rossi, F. Du","doi":"10.1109/VIS54862.2022.00033","DOIUrl":"https://doi.org/10.1109/VIS54862.2022.00033","url":null,"abstract":"Online shopping gives customers boundless options to choose from, backed by extensive product details and customer reviews, all from the comfort of home; yet, no amount of detailed, online information can outweigh the instant gratification and hands-on understanding of a product that is provided by physical stores. However, making purchasing decisions in physical stores can be challenging due to a large number of similar alternatives and limited accessibility of the relevant product information (e.g., features, ratings, and reviews). In this work, we present ARShopping: a web-based prototype to visually communicate detailed product information from an online setting on portable smart devices (e.g., phones, tablets, glasses), within the physical space at the point of purchase. This prototype uses augmented reality (AR) to identify products and display detailed information to help consumers make purchasing decisions that fulfill their needs while decreasing the decision-making time. In particular, we use a data fusion algorithm to improve the precision of the product detection; we then integrate AR visualizations into the scene to facilitate comparisons across multiple products and features. We designed our prototype based on interviews with 14 participants to better understand the utility and ease of use of the prototype.","PeriodicalId":190244,"journal":{"name":"2022 IEEE Visualization and Visual Analytics (VIS)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130205714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
LineCap: Line Charts for Data Visualization Captioning Models
Pub Date: 2022-07-15 · DOI: 10.1109/VIS54862.2022.00016
Anita Mahinpei, Zona Kostic, Christy Tanner
Data visualization captions help readers understand the purpose of a visualization and are crucial for individuals with visual impairments. The prevalence of poor figure captions and the successful application of deep learning approaches to image captioning motivate the use of similar techniques for automated figure captioning. However, research in this field has been stunted by the lack of suitable datasets. We introduce LineCap, a novel figure captioning dataset of 3,528 figures, and we provide insights from curating this dataset and using end-to-end deep learning models for automated figure captioning.
{"title":"LineCap: Line Charts for Data Visualization Captioning Models","authors":"Anita Mahinpei, Zona Kostic, Christy Tanner","doi":"10.1109/VIS54862.2022.00016","DOIUrl":"https://doi.org/10.1109/VIS54862.2022.00016","url":null,"abstract":"Data visualization captions help readers understand the purpose of a visualization and are crucial for individuals with visual impairments. The prevalence of poor figure captions and the successful application of deep learning approaches to image captioning motivate the use of similar techniques for automated figure captioning. However, research in this field has been stunted by the lack of suitable datasets. We introduce LineCap, a novel figure captioning dataset of 3,528 figures, and we provide insights from curating this dataset and using end-to-end deep learning models for automated figure captioning.","PeriodicalId":190244,"journal":{"name":"2022 IEEE Visualization and Visual Analytics (VIS)","volume":"122 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126638150","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
FairFuse: Interactive Visual Support for Fair Consensus Ranking
Pub Date: 2022-07-15 · DOI: 10.1109/VIS54862.2022.00022
Hilson Shrestha, Kathleen Cachel, Mallak Alkhathlan, Elke A. Rundensteiner, Lane Harrison
Fair consensus building combines the preferences of multiple rankers into a single consensus ranking, while ensuring any group defined by a protected attribute (such as race or gender) is not disadvantaged compared to other groups. Manually generating a fair consensus ranking is time-consuming and impractical, even for a fairly small number of candidates. While algorithmic approaches for auditing and generating fair consensus rankings have been developed, these have not been operationalized in interactive systems. To bridge this gap, we introduce FairFuse, a visualization system for generating, analyzing, and auditing fair consensus rankings. We construct a data model which includes base rankings entered by rankers, augmented with measures of group fairness, and algorithms for generating consensus rankings with varying degrees of fairness. We design novel visualizations that encode these measures in a parallel-coordinates-style rank visualization, with interactions for generating and exploring fair consensus rankings. We describe use cases in which FairFuse supports a decision-maker in ranking scenarios in which fairness is important, and discuss emerging challenges for future efforts supporting fairness-oriented rank analysis. Code and demo videos are available at https://osf.io/hd639/.
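The abstract references group fairness measures without defining them. One widely used measure in the fair-ranking literature is position-discounted exposure per group; the sketch below (function name and discount choice are ours) shows how such a measure could be computed for a consensus ranking.

```python
from collections import defaultdict
import math

def group_exposure(ranking, groups):
    """Average position-discounted exposure per protected group.

    `ranking` is an ordered list of candidate ids (best first);
    `groups` maps candidate id -> group label. Exposure uses the
    standard logarithmic position discount, so earlier positions
    contribute more.
    """
    totals, counts = defaultdict(float), defaultdict(int)
    for pos, cand in enumerate(ranking, start=1):
        g = groups[cand]
        totals[g] += 1.0 / math.log2(pos + 1)
        counts[g] += 1
    return {g: totals[g] / counts[g] for g in totals}

consensus = ["a", "b", "c", "d", "e", "f"]
groups = {"a": "g1", "b": "g1", "c": "g2", "d": "g2", "e": "g1", "f": "g2"}
exp = group_exposure(consensus, groups)
# A min/max ratio near 1.0 suggests neither group is systematically ranked lower.
print(exp, min(exp.values()) / max(exp.values()))
```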
{"title":"FairFuse: Interactive Visual Support for Fair Consensus Ranking","authors":"Hilson Shrestha, Kathleen Cachel, Mallak Alkhathlan, Elke A. Rundensteiner, Lane Harrison","doi":"10.1109/VIS54862.2022.00022","DOIUrl":"https://doi.org/10.1109/VIS54862.2022.00022","url":null,"abstract":"Fair consensus building combines the preferences of multiple rankers into a single consensus ranking, while ensuring any group defined by a protected attribute (such as race or gender) is not disadvantaged compared to other groups. Manually generating a fair consensus ranking is time-consuming and impractical- even for a fairly small number of candidates. While algorithmic approaches for auditing and generating fair consensus rankings have been developed, these have not been operationalized in interactive systems. To bridge this gap, we introduce FairFuse, a visualization system for generating, analyzing, and auditing fair consensus rankings. We construct a data model which includes base rankings entered by rankers, augmented with measures of group fairness, and algorithms for generating consensus rankings with varying degrees of fairness. We design novel visualizations that encode these measures in a parallel-coordinates style rank visualization, with interactions for generating and exploring fair consensus rankings. We describe use cases in which FairFuse supports a decision-maker in ranking scenarios in which fairness is important, and discuss emerging challenges for future efforts supporting fairness-oriented rank analysis. Code and demo videos available at https://osf.io/hd639/.","PeriodicalId":190244,"journal":{"name":"2022 IEEE Visualization and Visual Analytics (VIS)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126783619","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient Interpolation-based Pathline Tracing with B-spline Curves in Particle Dataset
Pub Date: 2022-07-14 · DOI: 10.1109/VIS54862.2022.00037
Haoyu Li, Tianyu Xiong, Han-Wei Shen
Particle tracing through numerical integration is a well-known approach to generating pathlines for visualization. However, for particle simulations, the computation of pathlines is expensive, since the interpolation method is complicated due to the lack of connectivity information. Previous studies use k-d trees to reduce the time for neighborhood search. However, the efficiency is still limited by the number of tracing time steps. Therefore, we propose a novel interpolation-based particle tracing method that first represents particle data as B-spline curves and interpolates B-spline control points to reduce the number of interpolation time steps. We demonstrate that our approach achieves good tracing accuracy with much less computation time.
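A minimal sketch of the core idea, using SciPy's B-spline routines: fit one particle's sampled positions to a cubic B-spline once, then evaluate the curve wherever needed instead of repeating expensive neighborhood searches. The synthetic spiral trajectory is ours; the paper's data layout and fitting details may differ.

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Positions of one particle sampled at 20 time steps (synthetic spiral).
t = np.linspace(0, 4 * np.pi, 20)
x, y, z = np.cos(t), np.sin(t), t / (4 * np.pi)

# Fit a cubic B-spline through the trajectory; `u` parameterizes the curve.
tck, u = splprep([x, y, z], s=0.0, k=3)

# Evaluating the spline at many parameters replaces repeated
# neighborhood searches with cheap control-point interpolation.
fine = np.linspace(0, 1, 200)
xs, ys, zs = splev(fine, tck)
print(xs[:3], ys[:3])
```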
{"title":"Efficient Interpolation-based Pathline Tracing with B-spline Curves in Particle Dataset","authors":"Haoyu Li, Tianyu Xiong, Han-Wei Shen","doi":"10.1109/VIS54862.2022.00037","DOIUrl":"https://doi.org/10.1109/VIS54862.2022.00037","url":null,"abstract":"Particle tracing through numerical integration is a well-known approach to generating pathlines for visualization. However, for particle simulations, the computation of pathlines is expensive, since the interpolation method is complicated due to the lack of connectivity information. Previous studies utilize the k-d tree to reduce the time for neighborhood search. However, the efficiency is still limited by the number of tracing time steps. Therefore, we propose a novel interpolation-based particle tracing method that first represents particle data as B-spline curves and interpolates B-spline control points to reduce the number of interpolation time steps. We demonstrate our approach achieves good tracing accuracy with much less computation time.","PeriodicalId":190244,"journal":{"name":"2022 IEEE Visualization and Visual Analytics (VIS)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121034096","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visualizing Confidence Intervals for Critical Point Probabilities in 2D Scalar Field Ensembles
Pub Date: 2022-07-13 · DOI: 10.1109/VIS54862.2022.00038
Dominik Vietinghoff, M. Böttinger, G. Scheuermann, Christian Heine
An important task in visualization is the extraction and highlighting of dominant features in data to support users in their analysis process. Topological methods are a well-known means of identifying such features in deterministic fields. However, many real-world phenomena studied today are the result of a chaotic system that cannot be fully described by a single simulation. Instead, the variability of such systems is usually captured with ensemble simulations that produce a variety of possible outcomes of the simulated process. The topological analysis of such ensemble data sets and uncertain data, in general, is less well studied. In this work, we present an approach for the computation and visual representation of confidence intervals for the occurrence probabilities of critical points in ensemble data sets. We demonstrate the added value of our approach over existing methods for critical point prediction in uncertain data on a synthetic data set and show its applicability to a data set from climate research.
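The abstract does not name the interval construction. A standard choice for a binomial proportion (here, the fraction of ensemble members in which a critical point occurs at a grid cell) is the Wilson score interval; the sketch below is a generic illustration, not the paper's exact method.

```python
import math

def wilson_interval(k, n, z=1.96):
    """95% Wilson score interval for a binomial proportion.

    k = number of ensemble members in which a critical point occurs
    at a given cell; n = ensemble size.
    """
    p = k / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return max(0.0, center - half), min(1.0, center + half)

# A minimum detected in 12 of 50 ensemble members.
print(wilson_interval(12, 50))  # roughly (0.14, 0.37)
```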
{"title":"Visualizing Confidence Intervals for Critical Point Probabilities in 2D Scalar Field Ensembles","authors":"Dominik Vietinghoff, M. Böttinger, G. Scheuermann, Christian Heine","doi":"10.1109/VIS54862.2022.00038","DOIUrl":"https://doi.org/10.1109/VIS54862.2022.00038","url":null,"abstract":"An important task in visualization is the extraction and highlighting of dominant features in data to support users in their analysis process. Topological methods are a well-known means of identifying such features in deterministic fields. However, many real-world phenom-ena studied today are the result of a chaotic system that cannot be fully described by a single simulation. Instead, the variability of such systems is usually captured with ensemble simulations that pro-duce a variety of possible outcomes of the simulated process. The topological analysis of such ensemble data sets and uncertain data, in general, is less well studied. In this work, we present an approach for the computation and visual representation of confidence intervals for the occurrence probabilities of critical points in ensemble data sets. We demonstrate the added value of our approach over existing methods for critical point prediction in uncertain data on a synthetic data set and show its applicability to a data set from climate research.","PeriodicalId":190244,"journal":{"name":"2022 IEEE Visualization and Visual Analytics (VIS)","volume":"102 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125976075","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Color Coding of Large Value Ranges Applied to Meteorological Data
Pub Date: 2022-07-13 · DOI: 10.1109/VIS54862.2022.00034
Daniel Braun, K. Ebell, V. Schemann, L. Pelchmann, S. Crewell, R. Borgo, T. V. Landesberger
This paper presents a novel color scheme designed to address the challenge of visualizing data series with large value ranges, where scale transformation provides limited support. We focus on meteorological data, where large value ranges are common. We apply our approach to meteorological scatterplots, one of the most common plot types in this domain. Our approach leverages the numerical representation of mantissa and exponent of the values to guide the design of novel “nested” color schemes, able to emphasize differences between magnitudes. Our user study evaluates the new designs alongside state-of-the-art color scales and representative color schemes used in the analysis of meteorological data: ColorCrafter, Viridis, and Rainbow. We assess accuracy, time, and confidence in the context of discrimination (comparison) and interpretation (reading) tasks. Our proposed color scheme significantly outperforms the others in interpretation tasks, while showing comparable performance in discrimination tasks.
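A toy sketch of the mantissa/exponent idea: the exponent selects an outer hue band and the mantissa drives an inner lightness ramp. The palette logic below is our simplification; the paper's nested schemes are carefully designed rather than computed this way.

```python
import math
import colorsys

def nested_color(value, min_exp=-2, max_exp=6):
    """Map a positive value to RGB: exponent -> hue band,
    mantissa -> lightness ramp within that band.
    """
    exp = min(max(math.floor(math.log10(value)), min_exp), max_exp)
    mantissa = value / 10 ** exp                        # in [1, 10)
    hue = (exp - min_exp) / (max_exp - min_exp + 1)     # discrete band per magnitude
    lightness = 0.35 + 0.4 * (mantissa - 1) / 9         # inner ramp
    return colorsys.hls_to_rgb(hue, lightness, 0.8)

# Same mantissa, different magnitudes -> same lightness, different hue bands.
for v in (3.2, 32.0, 3200.0):
    print(v, tuple(round(c, 2) for c in nested_color(v)))
```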
{"title":"Color Coding of Large Value Ranges Applied to Meteorological Data","authors":"Daniel Braun, K. Ebell, V. Schemann, L. Pelchmann, S. Crewell, R. Borgo, T. V. Landesberger","doi":"10.1109/VIS54862.2022.00034","DOIUrl":"https://doi.org/10.1109/VIS54862.2022.00034","url":null,"abstract":"This paper presents a novel color scheme designed to address the challenge of visualizing data series with large value ranges, where scale transformation provides limited support. We focus on meteo-rological data, where the presence of large value ranges is common. We apply our approach to meteorological scatterplots, as one of the most common plots used in this domain area. Our approach leverages the numerical representation of mantissa and exponent of the values to guide the design of novel “nested” color schemes, able to emphasize differences between magnitudes. Our user study evaluates the new designs, the state of the art color scales and rep-resentative color schemes used in the analysis of meteorological data: ColorCrafter, Viridis, and Rainbow. We assess accuracy, time and confidence in the context of discrimination (comparison) and interpretation (reading) tasks. Our proposed color scheme signifi-cantly outperforms the others in interpretation tasks, while showing comparable performances in discrimination tasks.","PeriodicalId":190244,"journal":{"name":"2022 IEEE Visualization and Visual Analytics (VIS)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115810707","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ASEVis: Visual Exploration of Active System Ensembles to Define Characteristic Measures
Pub Date: 2022-07-13 · DOI: 10.1109/VIS54862.2022.00039
Marina Evers, R. Wittkowski, L. Linsen
Simulation ensembles are a common tool in physics for understanding how a model outcome depends on input parameters. We analyze an active particle system, where each particle can use energy from its surroundings to propel itself. A multi-dimensional feature vector containing all particles' motion information can describe the whole system at each time step. The system's behavior strongly depends on input parameters like the propulsion mechanism of the particles. To understand how the time-varying behavior depends on the input parameters, it is necessary to introduce new measures that quantify differences in the dynamics of the ensemble members. We propose a tool that supports the interactive visual analysis of time-varying feature-vector ensembles. A core component of our tool allows for the interactive definition and refinement of new measures that can then be used to understand the system's behavior and compare the ensemble members. Different visualizations support the user in finding a characteristic measure for the system. By visualizing the user-defined measure, the user can then investigate the parameter dependencies and gain insights into the relationship between input parameters and simulation output.
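As a concrete example of the kind of user-definable measure such a tool supports, the sketch below compares two ensemble members by averaging per-time-step Euclidean distances between their feature vectors; the measure and data shapes are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def member_distance(a, b):
    """Distance between two ensemble members, each a
    (time_steps x features) array of per-step feature vectors.

    Averaging per-step Euclidean distances is one simple candidate
    measure; the tool's point is that users refine such definitions
    interactively.
    """
    return float(np.linalg.norm(a - b, axis=1).mean())

rng = np.random.default_rng(0)
m1 = rng.normal(size=(100, 8))                     # 100 time steps, 8 motion features
m2 = m1 + rng.normal(scale=0.1, size=(100, 8))
print(member_distance(m1, m2))                     # small value: similar dynamics
```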
{"title":"ASEVis: Visual Exploration of Active System Ensembles to Define Characteristic Measures","authors":"Marina Evers, R. Wittkowski, L. Linsen","doi":"10.1109/VIS54862.2022.00039","DOIUrl":"https://doi.org/10.1109/VIS54862.2022.00039","url":null,"abstract":"Simulation ensembles are a common tool in physics for understanding how a model outcome depends on input parameters. We analyze an active particle system, where each particle can use energy from its surroundings to propel itself. A multi-dimensional feature vector containing all particles' motion information can describe the whole system at each time step. The system's behavior strongly depends on input parameters like the propulsion mechanism of the particles. To understand how the time-varying behavior depends on the input parameters, it is necessary to introduce new measures to quantify the difference of the dynamics of the ensemble members. We propose a tool that supports the interactive visual analysis of time-varying feature-vector ensembles. A core component of our tool allows for the interactive definition and refinement of new measures that can then be used to understand the system's behavior and compare the ensemble members. Different visualizations support the user in finding a characteristic measure for the system. By visualizing the user-defined measure, the user can then investigate the parameter dependencies and gain insights into the relationship between input parameters and simulation output.","PeriodicalId":190244,"journal":{"name":"2022 IEEE Visualization and Visual Analytics (VIS)","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116064890","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Streamlining Visualization Authoring in D3 Through User-Driven Templates
Pub Date: 2022-07-13 · DOI: 10.1109/VIS54862.2022.00012
Hannah K. Bako, Alisha Varma, Anuoluwapo Faboro, Mahreen Haider, Favour Nerrise, B. Kenah, L. Battle
D3 is arguably the most popular tool for implementing web-based visualizations. Yet D3 has a steep learning curve that may hinder its adoption and continued use. To simplify the process of programming D3 visualizations, we must first understand the space of implementation practices that D3 users engage in. We present a qualitative analysis of 2500 D3 visualizations and their corresponding implementations. We find that 5 visualization types (Bar Charts, Geomaps, Line Charts, Scatterplots, and Force Directed Graphs) account for 80% of D3 visualizations found in our corpus. While implementation styles vary slightly across designs, the underlying code structure for all visualization types remains the same, presenting an opportunity for code reuse. Using our corpus of D3 examples, we synthesize reusable code templates for eight popular D3 visualization types and share them in our open source repository. Based on our results, we discuss design considerations for leveraging users' implementation patterns to reduce visualization design effort through design templates and auto-generated code recommendations.
{"title":"Streamlining Visualization Authoring in D3 Through User-Driven Templates","authors":"Hannah K. Bako, Alisha Varma, Anuoluwapo Faboro, Mahreen Haider, Favour Nerrise, B. Kenah, L. Battle","doi":"10.1109/VIS54862.2022.00012","DOIUrl":"https://doi.org/10.1109/VIS54862.2022.00012","url":null,"abstract":"D3 is arguably the most popular tool for implementing web-based visualizations. Yet D3 has a steep learning curve that may hinder its adoption and continued use. To simplify the process of programming D3 visualizations, we must first understand the space of implementation practices that D3 users engage in. We present a qualitative analysis of 2500 D3 visualizations and their corresponding imple-mentations. We find that 5 visualization types (Bar Charts, Geomaps, Line Charts, Scatterplots, and Force Directed Graphs) account for 80% of D3 visualizations found in our corpus. While implementation styles vary slightly across designs, the underlying code structure for all visualization types remains the same; presenting an opportunity for code reuse. Using our corpus of D3 examples, we synthesize reusable code templates for eight popular D3 visualization types and share them in our open source repository. Based on our results, we discuss design considerations for leveraging users' implementation patterns to reduce visualization design effort through design templates and auto-generated code recommendations.","PeriodicalId":190244,"journal":{"name":"2022 IEEE Visualization and Visual Analytics (VIS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131159908","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Who benefits from Visualization Adaptations? Towards a better Understanding of the Influence of Visualization Literacy
Pub Date: 2022-07-12 · DOI: 10.1109/VIS54862.2022.00027
Marc Satkowski, F. Kessler, S. Narciss, Raimund Dachselt
The ability to read, understand, and comprehend visual information representations is subsumed under the term visualization literacy (VL). One possibility to improve the use of information visualizations is to introduce adaptations. However, it remains unclear whether people with different VL benefit from adaptations to the same degree. We conducted an online experiment (n = 42) to investigate whether the effect of an adaptation (here: De-Emphasis) of visualizations (bar charts, scatter plots) on performance (accuracy, time) and user experience depends on users' VL level. Using linear mixed models for the analyses, we found a positive impact of the De-Emphasis adaptation across all conditions, as well as an interaction effect of adaptation and VL on the task completion time for bar charts. This work contributes to a better understanding of the intertwined relationship of VL and visual adaptations and motivates future research.
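For readers unfamiliar with the analysis, a linear mixed model with a per-participant random intercept can be fit with statsmodels roughly as follows; the toy data, effect sizes, and formula are our assumptions, not the study's actual model specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic repeated-measures data: completion time per trial, by
# adaptation condition and each participant's literacy score.
rng = np.random.default_rng(1)
rows = []
for p in range(42):
    vl = rng.uniform(0, 1)              # literacy score per participant
    intercept = rng.normal(0, 1)        # participant-level random effect
    for _ in range(8):
        adapted = int(rng.integers(0, 2))
        # Toy effect: de-emphasis speeds everyone up, less so at high VL.
        time = 12 + intercept - 2 * adapted + 1.5 * vl * adapted + rng.normal(0, 0.5)
        rows.append((f"p{p}", vl, "deemph" if adapted else "none", time))
df = pd.DataFrame(rows, columns=["participant", "vl", "adaptation", "time"])

# Fixed effects for adaptation, VL, and their interaction;
# random intercept per participant.
model = smf.mixedlm("time ~ adaptation * vl", df, groups=df["participant"])
print(model.fit().summary())
```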
{"title":"Who benefits from Visualization Adaptations? Towards a better Understanding of the Influence of Visualization Literacy","authors":"Marc Satkowski, F. Kessler, S. Narciss, Raimund Dachselt","doi":"10.1109/VIS54862.2022.00027","DOIUrl":"https://doi.org/10.1109/VIS54862.2022.00027","url":null,"abstract":"The ability to read, understand, and comprehend visual information representations is subsumed under the term visualization literacy (VL). One possibility to improve the use of information visualizations is to introduce adaptations. However, it is yet unclear whether people with different VL benefit from adaptations to the same degree. We conducted an online experiment (n = 42) to investigate whether the effect of an adaptation (here: De-Emphasis) of visualizations (bar charts, scatter plots) on performance (accuracy, time) and user experiences depends on users' VL level. Using linear mixed models for the analyses, we found a positive impact of the De-Emphasis adaptation across all conditions, as well as an interaction effect of adaptation and VL on the task completion time for bar charts. This work contributes to a better understanding of the intertwined relationship of VL and visual adaptations and motivates future research.","PeriodicalId":190244,"journal":{"name":"2022 IEEE Visualization and Visual Analytics (VIS)","volume":"130 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127380681","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Facilitating Conversational Interaction in Natural Language Interfaces for Visualization
Pub Date: 2022-07-01 · DOI: 10.1109/VIS54862.2022.00010
Rishab Mitra, Arpit Narechania, A. Endert, J. Stasko
Natural language (NL) toolkits enable visualization developers, who may not have a background in natural language processing (NLP), to create natural language interfaces (NLIs) for end-users to flexibly specify and interact with visualizations. However, these toolkits currently only support one-off utterances, with minimal capability to facilitate a multi-turn dialog between the user and the system. Developing NLIs with such conversational interaction capabilities remains a challenging task, requiring implementations of low-level NLP techniques to process a new query as an intent to follow up on an older query. We extend an existing Python-based toolkit, NL4DV, that processes an NL query about a tabular dataset and returns an analytic specification containing data attributes, analytic tasks, and relevant visualizations, modeled as a JSON object. Specifically, NL4DV now enables developers to facilitate multiple simultaneous conversations about a dataset and resolve associated ambiguities, augmenting new conversational information into the output JSON object. We demonstrate these capabilities through three examples: (1) an NLI to learn aspects of the Vega-Lite grammar, (2) a mind mapping application to create free-flowing conversations, and (3) a chatbot to answer questions and resolve ambiguities.
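A minimal sketch of treating a follow-up utterance as a delta over the previous turn's analytic specification; the field names loosely mirror NL4DV's JSON output but are illustrative, and the merge policy is our assumption rather than NL4DV's implementation.

```python
# A toy dialog state: each turn yields an analytic specification like
# NL4DV's JSON output (attributes, tasks, visualization); field names
# here are illustrative, not NL4DV's exact schema.
spec_turn1 = {
    "attributes": ["Worldwide Gross", "Genre"],
    "tasks": ["derived_value"],
    "vis": "bar chart",
}

def apply_followup(prev_spec, new_attributes=None, new_tasks=None):
    """Interpret a follow-up utterance as a delta on the previous spec,
    carrying forward anything the user did not override."""
    spec = dict(prev_spec)
    if new_attributes:
        # Append while de-duplicating and preserving order.
        spec["attributes"] = list(dict.fromkeys(prev_spec["attributes"] + new_attributes))
    if new_tasks:
        spec["tasks"] = new_tasks
    return spec

# "...now also split it by Creative Type" -> adds an attribute, keeps the rest.
print(apply_followup(spec_turn1, new_attributes=["Creative Type"]))
```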
{"title":"Facilitating Conversational Interaction in Natural Language Interfaces for Visualization","authors":"Rishab Mitra, Arpit Narechania, A. Endert, J. Stasko","doi":"10.1109/VIS54862.2022.00010","DOIUrl":"https://doi.org/10.1109/VIS54862.2022.00010","url":null,"abstract":"Natural language (NL) toolkits enable visualization developers, who may not have a background in natural language processing (NLP), to create natural language interfaces (NLIs) for end-users to flexibly specify and interact with visualizations. However, these toolkits currently only support one-off utterances, with minimal capability to facilitate a multi-turn dialog between the user and the system. Developing NLIs with such conversational interaction capabilities remains a challenging task, requiring implementations of low-level NLP techniques to process a new query as an intent to follow-up on an older query. We extend an existing Python-based toolkit, NL4DV, that processes an NL query about a tabular dataset and returns an analytic specification containing data attributes, analytic tasks, and relevant visualizations, modeled as a JSON object. Specifically, NL4DV now enables developers to facilitate multiple simultaneous conversations about a dataset and resolve associated ambiguities, augmenting new conversational information into the output JSON object. We demonstrate these capabilities through three examples: (1) an NLI to learn aspects of the Vega-Lite grammar, (2) a mind mapping application to create free-flowing conversations, and (3) a chatbot to answer questions and resolve ambiguities.","PeriodicalId":190244,"journal":{"name":"2022 IEEE Visualization and Visual Analytics (VIS)","volume":"106 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128094604","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}