Unravelling the Human Perspective and Considerations for Urban Data Visualization
Sarah Goodwin, S. Meier, L. Bartram, Alex Godwin, T. Nagel, M. Dörk
2021 IEEE 14th Pacific Visualization Symposium (PacificVis)
Pub Date: 2021-04-01 | DOI: 10.1109/PacificVis52677.2021.00024
Effective use of data is an essential asset to modern cities. Visualization as a tool for analysis, exploration, and communication has become a driving force in the task of unravelling our complex urban fabrics. This paper outlines the findings from a series of three workshops from 2018-2020 bringing together experts in urban data visualization with the aim of exploring multidisciplinary perspectives from the human-centric lens. Based on the rich and detailed workshop discussions identifying challenges and opportunities for urban data visualization research, we outline major human-centric themes and considerations fundamental for CityVis design and introduce a framework for an urban visualization design space.
Visual Analysis of Spatio-Temporal Trends in Time-Dependent Ensemble Data Sets on the Example of the North Atlantic Oscillation
Dominik Vietinghoff, Christian Heine, M. Böttinger, N. Maher, J. Jungclaus, G. Scheuermann
2021 IEEE 14th Pacific Visualization Symposium (PacificVis)
Pub Date: 2021-04-01 | DOI: 10.1109/PacificVis52677.2021.00017
A driving factor of the winter weather in Western Europe is the North Atlantic Oscillation (NAO), manifested by fluctuations in the difference of sea level pressure between the Icelandic Low and the Azores High. Different methods have been developed that describe the strength of this oscillation, but they rely on certain assumptions, e.g., fixed positions of these two pressure systems. It is possible that climate change affects the mean location of both the Low and the High and thus the validity of these descriptive methods. This study is the first to visually analyze large ensemble climate change simulations (the MPI Grand Ensemble) to robustly assess shifts of the drivers of the NAO phenomenon using the uncertain northern hemispheric surface pressure fields. For this, we use a sliding window approach and compute empirical orthogonal functions (EOFs) for each window and ensemble member, then compare the uncertainty of local extrema in the results as well as their temporal evolution across different CO2 scenarios. We find systematic northeastward shifts in the location of the pressure systems that correlate with the simulated warming. Applying visualization techniques for this analysis was not straightforward; we reflect and give some lessons learned for the field of visualization.
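The sliding-window EOF computation described in the abstract can be sketched in a few lines: for each window, remove the temporal mean and take the right singular vectors of the anomaly matrix as the EOFs. This is a minimal illustration of the general technique, not the authors' implementation; the function name and parameters are assumptions.

```python
import numpy as np

def sliding_window_eofs(field, window, step=1, n_modes=1):
    """Leading EOFs of a (time, space) field over sliding windows.

    field   : (T, S) array, e.g. sea level pressure at S grid points
    window  : window length in time steps
    returns : list of (explained_variance_ratio, eofs) per window
    """
    T, S = field.shape
    results = []
    for start in range(0, T - window + 1, step):
        chunk = field[start:start + window]
        # subtract the temporal mean so EOFs capture variability, not the mean state
        anomalies = chunk - chunk.mean(axis=0)
        # SVD of the anomaly matrix: rows of vt are the spatial EOF patterns
        _, s, vt = np.linalg.svd(anomalies, full_matrices=False)
        var = s**2 / np.sum(s**2)
        results.append((var[:n_modes], vt[:n_modes]))
    return results
```

For an NAO-style analysis one would then locate the extrema of the leading EOF pattern in each window (per ensemble member) and track how their positions drift across windows and scenarios.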
On the Visualization of Hierarchical Multivariate Data
Boyan Zheng, F. Sadlo
2021 IEEE 14th Pacific Visualization Symposium (PacificVis)
Pub Date: 2021-04-01 | DOI: 10.1109/PacificVis52677.2021.00026
In this paper, we study the visual design of hierarchical multivariate data analysis. We focus on the extension of four hierarchical univariate concepts—the sunburst chart, the icicle plot, the circular treemap, and the bubble treemap—to the multivariate domain. Our study identifies several advantageous design variants, which we discuss with respect to previous approaches, and whose utility we evaluate with a user study and demonstrate for different analysis purposes and different types of data.
Exploratory User Study on Graph Temporal Encodings
V. Filipov, Alessio Arleo, S. Miksch
2021 IEEE 14th Pacific Visualization Symposium (PacificVis)
Pub Date: 2021-04-01 | DOI: 10.1109/PacificVis52677.2021.00025
A temporal graph stores and reflects temporal information associated with its entities and relationships. Such graphs can be utilized to model a broad variety of problems in a multitude of domains. Researchers from different fields of expertise are increasingly applying graph visualization and analysis to explore unknown phenomena, complex emerging structures, and changes occurring over time in their data. While several empirical studies evaluate the benefits and drawbacks of different network representations, visualizing the temporal dimension in graphs still presents an open challenge. In this paper we propose an exploratory user study with the aim of evaluating different combinations of graph representations, namely node-link and adjacency matrix, and temporal encodings, such as superimposition, juxtaposition and animation, on typical temporal tasks. The study participants expressed positive feedback toward matrix representations, with generally quicker and more accurate responses than with the node-link representation.
Visualising Temporal Uncertainty: A Taxonomy and Call for Systematic Evaluation
Yashvir S. Grewal, Sarah Goodwin, Tim Dwyer
2021 IEEE 14th Pacific Visualization Symposium (PacificVis)
Pub Date: 2021-04-01 | DOI: 10.1109/PacificVis52677.2021.00013
Increased reliance on data in decision-making has highlighted the importance of conveying uncertainty in data visualisations. Yet developing visualisation techniques that clearly and accurately convey uncertainty in data is an open challenge across a variety of fields. This is especially the case when visualising temporal uncertainty. To facilitate the development of innovative and accessible temporal uncertainty visualisation techniques and respond to an identified gap in the literature, we propose the first-ever survey of over 50 temporal uncertainty visualisation techniques deployed in numerous fields. Our paper offers two contributions. First, we propose a novel taxonomy to be applied when classifying temporal uncertainty visualisation techniques. This takes into account the visualisation’s intended audience, as well as its level of discreteness in representing uncertainty. Second, we urge researchers and practitioners to use a greater variety of visualisations which differ in terms of their discreteness. In doing so, we believe that a more robust evaluation of visualisation techniques can be achieved.
KeywordMap: Attention-based Visual Exploration for Keyword Analysis
Yamei Tu, Jiayi Xu, Han-Wei Shen
2021 IEEE 14th Pacific Visualization Symposium (PacificVis)
Pub Date: 2021-04-01 | DOI: 10.1109/PacificVis52677.2021.00034
With the high growth rate of text data, extracting meaningful information from a large corpus becomes increasingly difficult. Keyword extraction and analysis is a common approach to tackle the problem, but it is non-trivial to identify important words in the text and represent the multifaceted properties of those words effectively. Traditional topic modeling based keyword analysis algorithms require hyper-parameters which are often difficult to tune without enough prior knowledge. In addition, the relationships among the keywords are often difficult to obtain. In this paper, we utilize the attention scores extracted from Transformer-based language models to capture word relationships. We propose a domain-driven attention tuning method, guiding the attention to learn domain-specific word relationships. From the attention, we build a keyword network and propose a novel algorithm, Attention-based Word Influence (AWI), to compute how influential each word is in the network. An interactive visual analytics system, KeywordMap, is developed to support multi-level analysis of keywords and keyword relationships through coordinated views. We measure the quality of keywords captured by our AWI algorithm quantitatively. We also evaluate the usefulness and effectiveness of KeywordMap through case studies.
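The abstract does not define AWI itself, but the general idea of scoring node influence on an attention-derived word graph can be sketched with a PageRank-style power iteration. Everything below (function name, symmetrization, damping) is an assumption for illustration, not the paper's algorithm.

```python
import numpy as np

def word_influence(attention, damping=0.85, iters=100, tol=1e-9):
    """Power-iteration influence scores on an attention-derived word graph.

    attention : (n, n) matrix of aggregated attention weights between n words
    returns   : (n,) influence score per word (scores sum to 1)
    """
    A = np.asarray(attention, dtype=float)
    # symmetrize and row-normalize into a stochastic transition matrix
    W = A + A.T
    W = W / W.sum(axis=1, keepdims=True)
    n = W.shape[0]
    score = np.full(n, 1.0 / n)
    for _ in range(iters):
        new = (1 - damping) / n + damping * (W.T @ score)
        if np.abs(new - score).sum() < tol:
            break
        score = new
    return score
```

In a KeywordMap-style system, such scores could drive the visual prominence (size, color) of words in the keyword network view.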
Visualization Support for Multi-criteria Decision Making in Software Issue Propagation
Youngtaek Kim, Hyeon Jeon, Young-Ho Kim, Yuhoon Ki, Hyunjoo Song, Jinwook Seo
2021 IEEE 14th Pacific Visualization Symposium (PacificVis)
Pub Date: 2021-04-01 | DOI: 10.1109/PacificVis52677.2021.00018
Finding the propagation scope for various types of issues in Software Product Lines (SPLs) is a complicated Multi-Criteria Decision Making (MCDM) problem. This task often requires human-in-the-loop data analysis, which covers not only multiple product attributes but also contextual information (e.g., internal policy, customer requirements, exceptional cases, cost efficiency). We propose an interactive visualization tool to support MCDM tasks in software issue propagation based on the user’s mental model. Our tool enables users to explore multiple criteria with their insight intuitively and find the appropriate propagation scope.
Visual Analysis on Machine Learning Assisted Prediction of Ionic Conductivity for Solid-State Electrolytes
Hui Shao, J. Pu, Yanlin Zhu, Boyang Gao, Zhengguo Zhu, Yunbo Rao
2021 IEEE 14th Pacific Visualization Symposium (PacificVis)
Pub Date: 2021-04-01 | DOI: 10.1109/PacificVis52677.2021.00038
Lithium ion batteries (LIBs) are widely used as energy sources in our daily life, powering mobile phones, electric vehicles, drones, and more. Due to the potential safety risks of liquid electrolytes, experts have tried to replace them with solid ones. However, finding suitable alternative materials through traditional methods is very difficult because of the incredibly high cost of the search. Machine learning (ML) based methods have recently been introduced for material prediction, but there are few assistive tools designed to let domain experts intuitively compare and analyze the performance of ML models. We therefore propose an interactive visualization system that helps experts select suitable ML models and comprehensively understand and explore the prediction results. Our system employs a multi-faceted visualization scheme designed to support analysis from the perspectives of feature composition, data similarity, model performance, and result presentation. A case study with real laboratory experiments, carried out by a domain expert, confirmed the effectiveness and helpfulness of our system.
Parsing and Summarizing Infographics with Synthetically Trained Icon Detection
Spandan Madan, Z. Bylinskii, C. Nobre, Matthew Tancik, Adrià Recasens, Kimberli Zhong, Sami Alsheikh, A. Oliva, F. Durand, H. Pfister
2021 IEEE 14th Pacific Visualization Symposium (PacificVis)
Pub Date: 2021-04-01 | DOI: 10.1109/PacificVis52677.2021.00012
Widely used in news, business, and educational media, infographics are handcrafted to effectively communicate messages about complex and often abstract topics including ‘ways to conserve the environment’ and ‘coronavirus prevention’. The computational understanding of infographics required for future applications like automatic captioning, summarization, search, and question-answering, will depend on being able to parse the visual and textual elements contained within. However, being composed of stylistically and semantically diverse visual and textual elements, infographics pose challenges for current A.I. systems. While automatic text extraction works reasonably well on infographics, standard object detection algorithms fail to identify the stand-alone visual elements in infographics that we refer to as ‘icons’. In this paper, we propose a novel approach to train an object detector using synthetically-generated data, and show that it succeeds at generalizing to detecting icons within in-the-wild infographics. We further pair our icon detection approach with an icon classifier and a state-of-the-art text detector to demonstrate three demo applications: topic prediction, multi-modal summarization, and multi-modal search. Parsing the visual and textual elements within infographics provides us with the first steps towards automatic infographic understanding.
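The core of synthetic training data for detection is compositing known elements onto a canvas so that ground-truth bounding boxes come for free. The toy sketch below illustrates that idea only; the function name and layout scheme are assumptions, and the paper's actual generation pipeline is surely richer (varied backgrounds, text, scale and style augmentation).

```python
import numpy as np

def make_synthetic_sample(icons, canvas_size=256, n_place=3, rng=None):
    """Composite random 'icon' patches onto a blank canvas, returning the
    image and ground-truth bounding boxes for detector training.

    icons : list of (h, w, 3) uint8 arrays, each smaller than the canvas
    returns (canvas, boxes) where boxes are (x, y, w, h) tuples
    """
    rng = rng if rng is not None else np.random.default_rng()
    canvas = np.full((canvas_size, canvas_size, 3), 255, dtype=np.uint8)
    boxes = []
    for _ in range(n_place):
        icon = icons[rng.integers(len(icons))]
        h, w = icon.shape[:2]
        # pick a top-left corner so the icon fits entirely on the canvas
        x = int(rng.integers(0, canvas_size - w))
        y = int(rng.integers(0, canvas_size - h))
        canvas[y:y + h, x:x + w] = icon
        boxes.append((x, y, w, h))
    return canvas, boxes
```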
ADVISor: Automatic Visualization Answer for Natural-Language Question on Tabular Data
Can Liu, Yun Han, Ruike Jiang, Xiaoru Yuan
2021 IEEE 14th Pacific Visualization Symposium (PacificVis)
Pub Date: 2021-04-01 | DOI: 10.1109/PacificVis52677.2021.00010
We propose an automatic pipeline to generate visualizations with annotations to answer natural-language questions raised by the public on tabular data. With a pre-trained language representation model, the input natural language questions and table headers are first encoded into vectors. From these vectors, a multi-task end-to-end deep neural network extracts the related data areas and the corresponding aggregation type. We present the result with carefully designed visualizations and annotations for different attribute types and tasks. We conducted a comparison experiment with state-of-the-art works and the best commercial tools. The results show that our method outperforms those works with higher accuracy and more effective visualization.
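A crude stand-in for the first stage of such a pipeline is matching question-token vectors against table-header vectors by cosine similarity to decide which columns are relevant. This is purely illustrative: ADVISor uses a trained multi-task network, not a threshold rule, and the function below and its threshold are invented for the sketch.

```python
import numpy as np

def match_columns(question_vecs, header_vecs, threshold=0.5):
    """Score each table header against encoded question tokens and keep
    those whose best cosine similarity clears a threshold.

    question_vecs : (q, d) array of encoded question tokens
    header_vecs   : (h, d) array of encoded table headers
    returns       : list of header indices judged relevant
    """
    q = question_vecs / np.linalg.norm(question_vecs, axis=1, keepdims=True)
    h = header_vecs / np.linalg.norm(header_vecs, axis=1, keepdims=True)
    sim = q @ h.T                # (q, h) cosine similarities
    best = sim.max(axis=0)       # best-matching question token per header
    return [i for i, s in enumerate(best) if s >= threshold]
```

In the full pipeline, the selected columns together with the predicted aggregation type (e.g. sum, average, count) would determine both the answer and the chart type used to present it.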