Pub Date: 2020-06-01 | DOI: 10.1109/PacificVis48177.2020.7614
Max Sondag, Wouter Meulemans, C. Schulz, Kevin Verbeek, D. Weiskopf, B. Speckmann
Rectangular treemaps visualize hierarchical numerical data by recursively partitioning an input rectangle into smaller rectangles whose areas match the data. Numerical data often has uncertainty associated with it. To visualize uncertainty in a rectangular treemap, we identify two conflicting key requirements: (i) to assess the data value of a node in the hierarchy, the area of its rectangle should directly match its data value, and (ii) to facilitate comparison between data and uncertainty, uncertainty should be encoded using the same visual variable as the data, that is, area. We present Uncertainty Treemaps, which meet both requirements simultaneously by introducing the concept of hierarchical uncertainty masks. First, we define a new cost function that measures the quality of Uncertainty Treemaps. Then, we show how to adapt existing treemapping algorithms to support uncertainty masks. Finally, we demonstrate the usefulness and quality of our technique through an expert review and a computational experiment on real-world datasets.
Title: Uncertainty Treemaps
Venue: 2020 IEEE Pacific Visualization Symposium (PacificVis)
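A minimal sketch of the two area requirements described above: a slice-and-dice partition whose rectangle areas match the data values, with a centered inner rectangle per leaf standing in for an uncertainty mask (its area encodes the uncertain fraction of the value). This is a toy illustration, not the authors' algorithm or cost function.

```python
def slice_and_dice(rect, values, horizontal=True):
    """Partition rect=(x, y, w, h) into strips whose areas are
    proportional to values (requirement i: area matches data)."""
    x, y, w, h = rect
    total = sum(values)
    rects, offset = [], 0.0
    for v in values:
        frac = v / total
        if horizontal:
            rects.append((x + offset * w, y, frac * w, h))
        else:
            rects.append((x, y + offset * h, w, frac * h))
        offset += frac
    return rects

def uncertainty_mask(rect, value, uncertainty):
    """Centered inner rectangle whose area is the uncertain fraction
    of the leaf's area (requirement ii: uncertainty also uses area)."""
    x, y, w, h = rect
    frac = min(uncertainty / value, 1.0)  # fraction of area that is uncertain
    s = frac ** 0.5                       # scale both sides by sqrt(frac)
    mw, mh = w * s, h * s
    return (x + (w - mw) / 2, y + (h - mh) / 2, mw, mh)

# Three leaves with values 40, 35, 25 and hypothetical uncertainties.
leaves = slice_and_dice((0, 0, 100, 60), [40, 35, 25])
masks = [uncertainty_mask(r, v, u)
         for r, v, u in zip(leaves, [40, 35, 25], [4, 7, 5])]
```

With this scheme a leaf's outer area still reads as its data value, while the inner mask's area reads as the uncertainty on the same scale.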
Pub Date: 2020-06-01 | DOI: 10.1109/PacificVis48177.2020.6431
Xin Liang, Hanqi Guo, S. Di, F. Cappello, Mukund Raj, Chunhui Liu, K. Ono, Zizhong Chen, T. Peterka
The objective of this work is to develop error-bounded lossy compression methods to preserve topological features in 2D and 3D vector fields. Specifically, we explore the preservation of critical points in piecewise linear vector fields. We define the preservation of critical points as, without any false positive, false negative, or false type change in the decompressed data, (1) keeping each critical point in its original cell and (2) retaining the type of each critical point (e.g., saddle and attracting node). The key to our method is to adapt a vertex-wise error bound for each grid point and to compress the input data together with the error bound field using a modified lossy compressor. Our compression algorithm can also be embarrassingly parallelized for large data handling and in situ processing. We benchmark our method by comparing it with existing lossy compressors in terms of false positive/negative/type rates, compression ratio, and various vector field visualizations with several scientific applications.
Title: Toward Feature-Preserving 2D and 3D Vector Field Compression
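The notions of a critical point's cell and type can be made concrete with a small sketch: locating the zero of a piecewise linear 2D field inside one triangle and classifying it by the eigenvalues of the (constant) Jacobian. This is a textbook illustration of the definitions in the abstract, not the paper's compression method.

```python
import numpy as np

def critical_point_in_triangle(P, V):
    """P: 3x2 vertex positions, V: 3x2 vectors at the vertices.
    Returns the zero of the linear interpolant if it lies inside
    the triangle, else None."""
    # Solve for barycentric coords (a, b, c): a*v0 + b*v1 + c*v2 = 0, a+b+c = 1.
    A = np.vstack([V.T, np.ones(3)])
    rhs = np.array([0.0, 0.0, 1.0])
    try:
        bary = np.linalg.solve(A, rhs)
    except np.linalg.LinAlgError:
        return None  # degenerate field on this cell
    if np.all(bary >= 0):  # zero lies inside (or on the boundary of) the cell
        return bary @ P
    return None

def classify(P, V):
    """Critical point type from the Jacobian of the linear field."""
    # Fit v(x) = J x + c from the three vertex samples.
    M = np.hstack([P, np.ones((3, 1))])  # rows [x, y, 1]
    coef = np.linalg.solve(M, V)         # rows: dv/dx, dv/dy, const
    J = coef[:2].T                       # 2x2 Jacobian
    ev = np.linalg.eigvals(J)
    if np.iscomplexobj(ev) and np.any(np.abs(ev.imag) > 1e-12):
        return "focus/center"
    re = np.sort(ev.real)
    if re[0] < 0 < re[1]:
        return "saddle"
    return "attracting node" if re[1] < 0 else "repelling node"
```

A feature-preserving compressor must keep both outputs unchanged for every cell of the decompressed field: the containing cell of each zero, and its classification.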
Pub Date: 2020-06-01 | DOI: 10.1109/PacificVis48177.2020.1043
Can Liu, Liwenhan Xie, Yun Han, Datong Wei, Xiaoru Yuan
In this paper, we propose a novel approach to automatically generate captions for visualization charts. In the proposed method, visual marks and visual channels, together with the associated text information in the original charts, are first extracted and identified with a multilayer perceptron classifier. Meanwhile, data information can also be retrieved by parsing visual marks with the extracted mapping relationships. Then a 1-D convolutional residual network is employed to analyze the relationships between visual elements and recognize significant features of the visualization charts, with both data and visual information as input. In the final step, a full description of the chart is generated through a template-based approach. The generated captions effectively cover the main visual features of the charts and support the major feature types found in common charts. We further demonstrate the effectiveness of our approach through several cases.
Title: AutoCaption: An Approach to Generate Natural Language Description from Visualization Automatically
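The final, template-based step lends itself to a tiny sketch. The feature record and template wording below are invented for illustration; the paper's actual feature schema and templates are not specified here.

```python
# Hypothetical templates keyed by the type of significant feature that
# the earlier pipeline stages are assumed to have detected in the chart.
TEMPLATES = {
    "trend":   "This {chart} shows {y} over {x}; {series} {direction} overall.",
    "extreme": "This {chart} shows {y} by {x}; {series} peaks at {at}.",
}

def generate_caption(features):
    """Fill the template matching the detected feature type."""
    template = TEMPLATES[features["feature_type"]]
    return template.format(**features)

caption = generate_caption({
    "feature_type": "extreme",
    "chart": "bar chart",
    "y": "revenue",
    "x": "quarter",
    "series": "Product A",
    "at": "Q3",
})
```

In the paper's pipeline, the slots would be filled from the classifier's mark/channel output and the parsed data values rather than hand-written dictionaries.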
Pub Date: 2020-06-01 | DOI: 10.1109/PacificVis48177.2020.3542
Junpeng Wang, Wei Zhang, Hao Yang
Two fundamental problems in machine learning are recognition and generation. Apart from the tremendous research effort devoted to these two problems individually, finding the association between them has recently attracted increasing attention. The Symbol-Concept Association Network (SCAN), recently proposed by Google DeepMind, is one of the most popular models for this problem; it integrates an unsupervised concept abstraction process and a supervised symbol-concept association process. Despite the outstanding performance of this deep neural network, interpreting and evaluating it remain challenging. Guided by the practical needs of deep learning experts, this paper proposes a visual analytics approach, SCANViz, to address this challenge in the visual domain. Specifically, SCANViz evaluates the performance of SCAN through its power of recognition and generation, facilitates the exploration of the latent space derived from both the unsupervised extraction and supervised association processes, and empowers interactive training of SCAN to interpret the model's understanding of a particular visual concept. Through concrete case studies with multiple deep learning experts, we validate the effectiveness of SCANViz.
Title: SCANViz: Interpreting the Symbol-Concept Association Captured by Deep Neural Networks through Visual Analytics
Pub Date: 2020-06-01 | DOI: 10.1109/PacificVis48177.2020.1010
Taerin Yoon, Hyunwoo Han, Hyoji Ha, Juwon Hong, Kyungwon Lee
Understanding and maintaining the intended meaning of original text used for citations is essential for unbiased and accurate scholarly work. To this end, this study provides a visual system for exploring the citation relationships and the motivations behind citations within papers. For this purpose, papers from the IEEE Information Visualization Conference that introduce research on data visualization were collected; based on the internal citation relationships, citation sentences were extracted and the text was analyzed. In addition, a visualization interface is provided to identify the citation relationships, citation pattern information, and citing motivation. Lastly, pattern analysis of the citation relationships along with the citing motivation and topic is demonstrated through a case study. Our paper-exploring system can confirm the purpose for which specific papers are cited by other authors. Furthermore, the findings can help identify the characteristics of related studies based on the target papers.
Title: A Conference Paper Exploring System Based on Citing Motivation and Topic
Pub Date: 2020-06-01 | DOI: 10.1109/PacificVis48177.2020.1007
Yu Dong, A. Fauth, M. Huang, Yi Chen, Jie Liang
Hierarchical structures are very common in the real world for recording the many kinds of relational data generated in daily life and business procedures. A very popular visualization method for displaying such data structures is the "Tree". A variety of tree visualization methods have been proposed so far, but most can only visualize one hierarchical dataset at a time, which makes it difficult to compare two or more hierarchical datasets. In this paper, we propose PansyTree, which uses a tree metaphor to visualize merged hierarchies. We design a unique icon, named after the pansy flower, to represent each merged node in the structure. Each pansy is encoded by three colors, mapping data items from three different datasets at the same hierarchical position (or tree node). The petals and sepal of a pansy show each attribute's values and hierarchical information. We also redefine the links in the force layout, encoding them with width and animation to better convey hierarchical information. We further apply PansyTree to the CNCEE datasets and demonstrate two use cases to verify its effectiveness. The main contribution of this work is to merge three datasets into one tree, which makes it much easier to explore and compare the structures, data items, and data attributes with visual tools.
Title: PansyTree: Merging Multiple Hierarchies
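The merge of several hierarchies at matching node positions can be sketched as follows; the nested-dict data model is an assumption for illustration, not the paper's.

```python
def merge_trees(trees):
    """trees: list of {name: {"value": v, "children": {...}}} dicts,
    one per source dataset. Returns one tree whose nodes carry a
    value slot per dataset (None where a node is absent)."""
    names = []
    for t in trees:
        for name in t:
            if name not in names:  # preserve first-seen order, no duplicates
                names.append(name)
    merged = {}
    for name in names:
        nodes = [t.get(name) for t in trees]
        merged[name] = {
            # one value per dataset; None marks a dataset lacking this node
            "values": [n["value"] if n else None for n in nodes],
            "children": merge_trees(
                [n["children"] if n else {} for n in nodes]),
        }
    return merged
```

Each merged node then holds exactly the per-dataset values that a pansy glyph would encode with its three colors.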
Pub Date: 2020-06-01 | DOI: 10.1109/PacificVis48177.2020.9915
Ding-Bang Chen, Chien-Hsun Lai, Yun-Hsuan Lien, Yu-Hsuan Lin, Yu-Shuen Wang, K. Ma
In this paper, we present a visualization system for users to study multivariate time series data. They first identify trends or anomalies from a global view and then examine details in a local view. Specifically, we train a neural network to project high-dimensional data to a two-dimensional (2D) planar space while retaining global data distances. By aligning the 2D points with a predefined color map, high-dimensional data can be represented by colors. Because perceptual color differentiation may fail to reflect data distance, we optimize perceptual color differentiation on each map region by deformation. A region with large perceptual color differentiation expands, whereas a region with small differentiation shrinks. Since colors do not occupy any space in a visualization, we convey the overview of multivariate time series data in a calendar view. Cells in the view are color-coded to represent multivariate data at different time spans. Users can observe color changes over time to identify events of interest. Afterward, they study the details of an event by examining parallel coordinate plots. Cells in the calendar view and the parallel coordinate plots are dynamically linked for users to obtain insights that are barely noticeable in large datasets. The experimental results, comparisons, case studies, and user study indicate that our visualization system is feasible and effective.
Title: Representing Multivariate Data by Optimal Colors to Uncover Events of Interest in Time Series Data
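The "2D position to color" step can be illustrated with a simple stand-in color map (angle around the map center selects hue, radius selects saturation); the paper's predefined color map and deformation-based optimization are not reproduced here.

```python
import colorsys
import math

def point_to_color(x, y, cx=0.5, cy=0.5):
    """Map a projected 2D point in [0, 1]^2 to an RGB color.
    Angle around (cx, cy) picks the hue; distance picks the saturation."""
    dx, dy = x - cx, y - cy
    hue = (math.atan2(dy, dx) / (2 * math.pi)) % 1.0
    sat = min(math.hypot(dx, dy) / math.hypot(cx, cy), 1.0)
    return colorsys.hsv_to_rgb(hue, sat, 0.9)
```

Under such a map, nearby 2D points get similar colors, so a color-coded calendar cell summarizes the multivariate state of its time span at a glance.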
Pub Date: 2020-06-01 | DOI: 10.1109/PacificVis48177.2020.1031
Zhihang Dong, Tongshuang Sherry Wu, Sicheng Song, M. Zhang
Conventional attention visualization tools compromise either the readability or the information conveyed when documents are lengthy, especially when the documents have imbalanced sizes. Our work strives toward a more intuitive visualization for a subset of Natural Language Processing tasks in which attention is mapped between documents with imbalanced sizes. We extend the flow map visualization to enhance the readability of the attention-augmented documents. Through interaction, our design enables semantic filtering that helps users prioritize important tokens and meaningful matchings for in-depth exploration. Case studies and informal user studies in machine comprehension show that our visualization effectively helps users gain an initial understanding of what their models are "paying attention to." We discuss how the work can be extended to other domains, as well as plugged into more end-to-end systems for model error analysis.
Title: Interactive Attention Model Explorer for Natural Language Processing Tasks with Unbalanced Data Sizes
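The kind of per-token filtering that keeps a flow map of attention readable can be sketched as follows; the top-k scheme is an assumption for illustration, not the paper's semantic filtering.

```python
import numpy as np

def top_k_links(attention, k=2):
    """attention: (n_query, n_doc) weight matrix between a short query
    and a long document. Keeps only the k strongest links per query
    token so the drawn flows stay readable. Returns (q, d, w) triples."""
    links = []
    for q, row in enumerate(attention):
        for d in np.argsort(row)[::-1][:k]:  # indices of largest weights first
            links.append((q, int(d), float(row[d])))
    return links
```

Each retained triple becomes one flow in the map, connecting a query token to a document token with a width proportional to its weight.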