Clutter-aware label layout
Pub Date: 2015-04-14 | DOI: 10.1109/PACIFICVIS.2015.7156379
Yu Meng, Hui Zhang, Mengchen Liu, Shixia Liu
A high-quality label layout is critical for effective information understanding and consumption. Existing labeling methods fail to help users quickly gain an overview of visualized data when the number of labels is large. Visual clutter is a major challenge preventing these methods from being applied to real-world applications. To address this, we propose a context-aware label layout method that can measure and reduce visual clutter during the layout process. Our method formulates the clutter model using four factors: confusion, visual connection, distance, and intersection. Based on this clutter model, we develop an effective clutter-aware labeling method that can generate clear and legible label layouts for different visualizations. We have applied our method to several types of visualizations, and the results are promising, particularly in producing uncluttered and informative label layouts.
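The abstract does not spell out how the four factors are combined, but a weighted score is one plausible reading. The sketch below is a minimal illustration assuming normalized factor values and placeholder weights; the function name and weights are illustrative, not the authors' formulation.

```python
import numpy as np

def clutter_score(confusion, visual_connection, distance, intersection,
                  weights=(0.25, 0.25, 0.25, 0.25)):
    """Illustrative clutter score: a weighted combination of the four factors.

    Each argument is assumed to be normalized to [0, 1] for one candidate
    label placement; the weights are placeholders, not the paper's values.
    """
    factors = np.array([confusion, visual_connection, distance, intersection])
    return float(np.dot(weights, factors))

# Greedy use: pick the candidate placement with the lowest clutter score.
candidates = [
    {"pos": (10, 20), "confusion": 0.1, "visual_connection": 0.3,
     "distance": 0.2, "intersection": 0.0},
    {"pos": (40, 15), "confusion": 0.4, "visual_connection": 0.1,
     "distance": 0.5, "intersection": 0.2},
]
best = min(candidates, key=lambda c: clutter_score(
    c["confusion"], c["visual_connection"], c["distance"], c["intersection"]))
print(best["pos"])
```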
{"title":"Clutter-aware label layout","authors":"Yu Meng, Hui Zhang, Mengchen Liu, Shixia Liu","doi":"10.1109/PACIFICVIS.2015.7156379","DOIUrl":"https://doi.org/10.1109/PACIFICVIS.2015.7156379","url":null,"abstract":"A high-quality label layout is critical for effective information understanding and consumption. Existing labeling methods fail to help users quickly gain an overview of visualized data when the number of labels is large. Visual clutter is a major challenge preventing these methods from being applied to real-world applications. To address this, we propose a context-aware label layout that can measure and reduce visual clutter during the layout process. Our method formulates the clutter model using four factors: confusion, visual connection, distance, and intersection. Based on this clutter model, an effective clutter-aware labeling method has been developed that can generate clear and legible label layouts in different visualizations. We have applied our method to several types of visualizations and the results show promise, especially in support of an uncluttered and informative label layout.","PeriodicalId":177381,"journal":{"name":"2015 IEEE Pacific Visualization Symposium (PacificVis)","volume":" 6","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113951201","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visual exploration of Location-Based Social Networks data in urban planning
Pub Date: 2015-04-14 | DOI: 10.1109/PACIFICVIS.2015.7156367
D. F. Prieto, Eva Hagen, D. Engel, Dirk Bayer, J. T. Hernández, C. Garth, Inga Scheler
The increasing amount of data generated by Location Based Social Networks (LBSN) such as Twitter, Flickr, or Foursquare is currently drawing the attention of urban planners, as it is a new source of data that contains valuable information about the behavior of the inhabitants of a city. Making this data accessible to the urban planning domain can add value to decision-making processes. However, analyzing the spatial and temporal characteristics of this data in the context of urban planning is an ongoing research problem. This paper describes ongoing work on the design and development of a visual exploration tool to facilitate this task. The proposed design integrates a visual exploration tool with the capabilities of a visual query system from a multilevel perspective (e.g., the multiple spatial scales and temporal resolutions implicit in LBSN data). We present a preliminary discussion of the design and of the potential insights that can be gained from exploring and analyzing this data with the proposed tool, along with conclusions and directions for future work.
{"title":"Visual exploration of Location-Based Social Networks data in urban planning","authors":"D. F. Prieto, Eva Hagen, D. Engel, Dirk Bayer, J. T. Hernández, C. Garth, Inga Scheler","doi":"10.1109/PACIFICVIS.2015.7156367","DOIUrl":"https://doi.org/10.1109/PACIFICVIS.2015.7156367","url":null,"abstract":"The increasing amount of data generated by Location Based Social Networks (LBSN) such as Twitter, Flickr, or Foursquare, is currently drawing the attention of urban planners, as it is a new source of data that contains valuable information about the behavior of the inhabitants of a city. Making this data accessible to the urban planning domain can add value to the decision making processes. However, the analysis of the spatial and temporal characteristics of this data in the context of urban planning is an ongoing research problem. This paper describes ongoing work in the design and development of a visual exploration tool to facilitate this task. The proposed design provides an approach towards the integration of a visual exploration tool and the capabilities of a visual query system from a multilevel perspective (e.g., multiple spatial scales and temporal resolutions implicit in LBSN data). A preliminary discussion about the design and the potential insights that can be gained from the exploration and analysis of this data with the proposed tool is presented, along with the conclusions and future work for the continuation of this work.","PeriodicalId":177381,"journal":{"name":"2015 IEEE Pacific Visualization Symposium (PacificVis)","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129698741","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Uncertainty modeling and error reduction for pathline computation in time-varying flow fields
Pub Date: 2015-04-14 | DOI: 10.1109/PACIFICVIS.2015.7156380
Chun-Ming Chen, Ayan Biswas, Han-Wei Shen
When the spatial and temporal resolutions of a time-varying simulation become very high, it is not possible to process or store data from every time step due to the high computation and storage cost. Although using uniformly down-sampled data for visualization is a common practice, important information in the unstored time steps can be lost. Currently, linear interpolation is a popular method for approximating data between the stored time steps. For pathline computation, however, errors from the interpolated velocity in the time dimension can accumulate quickly and make the trajectories rather unreliable. To inform the scientist of the error involved in the visualization, it is important to quantify and display the uncertainty and, more importantly, to reduce the error whenever possible. In this paper, we present an algorithm to model temporal interpolation error and an error reduction scheme to improve the data accuracy for temporally down-sampled data. We show that it is possible to compute polynomial regression and measure the interpolation errors incrementally with one sequential scan of the time-varying flow field. We also show empirically that when the data sequence is fitted with least-squares regression, the errors can be approximated with a Gaussian distribution. With the end positions of particle traces stored, we show that our error modeling scheme can better estimate the intermediate particle trajectories between the stored time steps, using a maximum likelihood method that combines forward and backward particle traces.
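The incremental regression idea lends itself to a compact sketch: the normal-equation sums can be accumulated during a single sequential scan, and a Gaussian residual estimate falls out of the same sums. The class below is a minimal illustration under my own assumptions (names, polynomial degree, residual bookkeeping), not the paper's implementation.

```python
import numpy as np

class IncrementalPolyFit:
    """Least-squares polynomial fit over time, accumulated one time step at a time.

    Sketch of building a regression model and its residual statistics in a
    single sequential scan; fields and bookkeeping are illustrative assumptions.
    """

    def __init__(self, degree=2):
        self.degree = degree
        self.xtx = np.zeros((degree + 1, degree + 1))  # running X^T X
        self.xty = None                                # running X^T Y
        self.yty = 0.0                                 # running sum of squared values
        self.count = 0                                 # scalar samples seen so far

    def add_step(self, t, values):
        """Fold one time step (scalar or flattened field) into the running sums."""
        values = np.atleast_1d(np.asarray(values, dtype=float))
        if self.xty is None:
            self.xty = np.zeros((self.degree + 1, values.size))
        basis = np.array([t ** k for k in range(self.degree + 1)])
        self.xtx += np.outer(basis, basis)
        self.xty += np.outer(basis, values)
        self.yty += float(values @ values)
        self.count += values.size

    def coefficients(self):
        """Solve the accumulated normal equations for the polynomial coefficients."""
        return np.linalg.solve(self.xtx, self.xty)

    def residual_std(self):
        """Gaussian error model: residual standard deviation recovered from the
        same running sums, with no second pass over the data."""
        c = self.coefficients()
        ssr = self.yty - 2.0 * np.sum(c * self.xty) + np.sum(c * (self.xtx @ c))
        return np.sqrt(max(ssr, 0.0) / max(self.count, 1))

# Usage: one sequential scan over stored time steps of a (flattened) field.
fit = IncrementalPolyFit(degree=2)
for t, field in enumerate(np.sin(np.linspace(0, 1, 8)) + 0.1 * k for k in range(10)):
    fit.add_step(t, field)
print(fit.residual_std())
```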
{"title":"Uncertainty modeling and error reduction for pathline computation in time-varying flow fields","authors":"Chun-Ming Chen, Ayan Biswas, Han-Wei Shen","doi":"10.1109/PACIFICVIS.2015.7156380","DOIUrl":"https://doi.org/10.1109/PACIFICVIS.2015.7156380","url":null,"abstract":"When the spatial and temporal resolutions of a time-varying simulation become very high, it is not possible to process or store data from every time step due to the high computation and storage cost. Although using uniformly down-sampled data for visualization is a common practice, important information in the un-stored data can be lost. Currently, linear interpolation is a popular method used to approximate data between the stored time steps. For pathline computation, however, errors from the interpolated velocity in the time dimension can accumulate quickly and make the trajectories rather unreliable. To inform the scientist the error involved in the visualization, it is important to quantify and display the uncertainty, and more importantly, to reduce the error whenever possible. In this paper, we present an algorithm to model temporal interpolation error, and an error reduction scheme to improve the data accuracy for temporally down-sampled data. We show that it is possible to compute polynomial regression and measure the interpolation errors incrementally with one sequential scan of the time-varying flow field. We also show empirically that when the data sequence is fitted with least-squares regression, the errors can be approximated with a Gaussian distribution. With the end positions of particle traces stored, we show that our error modeling scheme can better estimate the intermediate particle trajectories between the stored time steps based on a maximum likelihood method that utilizes forward and backward particle traces.","PeriodicalId":177381,"journal":{"name":"2015 IEEE Pacific Visualization Symposium (PacificVis)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126550612","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adaptive particle relaxation for time surfaces
Pub Date: 2015-04-14 | DOI: 10.1109/PACIFICVIS.2015.7156371
A. Berres, H. Obermaier, K. Joy, H. Hagen
Time surfaces are a versatile tool to visualise advection and deformation in flow fields. Due to complex flow behaviours involving stretching, shearing, and folding, straightforward mesh-based representations of these surfaces can develop artefacts and degenerate quickly. Common counter-measures rely on refinement and adaptive insertion of new particles, which leads to an unpredictable increase in memory requirements. We propose a novel time surface extraction technique that keeps the number of required flow particles constant, while providing a high level of fidelity and enabling straightforward load balancing. Our solution implements a 2D particle relaxation procedure that makes use of local surface metric tensors to model surface deformations. We combine this with an accurate bicubic surface representation to provide an artefact-free surface visualisation. We demonstrate and evaluate the benefits of the proposed method with respect to surface accuracy and computational efficiency.
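As a rough illustration of metric-driven relaxation, the sketch below moves each particle in the 2D surface parameterization toward its neighbourhood centroid, rescaled by a local metric tensor; the function, its neighbourhood structure, and the weighting are assumptions for illustration, not the authors' procedure.

```python
import numpy as np

def relax_particles(uv, neighbors, metric, step=0.5):
    """One relaxation iteration in 2D parameter space.

    uv        : (n, 2) particle positions in the surface parameterization
    neighbors : list of index arrays; neighbors[i] are the neighbours of particle i
    metric    : (n, 2, 2) local surface metric tensors approximating deformation
    A sketch of metric-weighted Laplacian smoothing, not the paper's exact scheme.
    """
    new_uv = uv.copy()
    for i, nbrs in enumerate(neighbors):
        if len(nbrs) == 0:
            continue
        # Displacement toward the neighbourhood centroid ...
        delta = uv[nbrs].mean(axis=0) - uv[i]
        # ... rescaled by the local metric so that strongly stretched regions
        # pull particles more than compressed ones.
        new_uv[i] = uv[i] + step * (metric[i] @ delta)
    return new_uv
```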
{"title":"Adaptive particle relaxation for time surfaces","authors":"A. Berres, H. Obermaier, K. Joy, H. Hagen","doi":"10.1109/PACIFICVIS.2015.7156371","DOIUrl":"https://doi.org/10.1109/PACIFICVIS.2015.7156371","url":null,"abstract":"Time surfaces are a versatile tool to visualise advection and deformation in flow fields. Due to complex flow behaviours involving stretching, shearing, and folding, straightforward mesh-based representations of these surfaces can develop artefacts and degenerate quickly. Common counter-measures rely on refinement and adaptive insertion of new particles which lead to an unpredictable increase in memory requirements. We propose a novel time surface extraction technique that keeps the number of required flow particles constant, while providing a high level of fidelity and enabling straightforward load balancing. Our solution implements a 2D particle relaxation procedure that makes use of local surface metric tensors to model surface deformations. We combine this with an accurate bicubic surface representation to provide an artefact-free surface visualisation. We demonstrate and evaluate benefits of the proposed method with respect to surface accuracy and computational efficiency.","PeriodicalId":177381,"journal":{"name":"2015 IEEE Pacific Visualization Symposium (PacificVis)","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134072936","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visualizing 2D scalar fields with hierarchical topology
Pub Date: 2015-04-14 | DOI: 10.1109/PACIFICVIS.2015.7156370
Keqin Wu, Song Zhang
This paper describes an effort to create new visualizations by exploiting hierarchical scalar topology. First, we build a hierarchical topology by synchronously constructing and simplifying the Contour Tree (CT) and Morse-Smale (MS) complex of a scalar field. We then introduce three algorithms based on the hierarchical topology: (1) topology-based multi-resolution contouring, which provides an overview of a scalar field by extracting iso-values from the simplified CT and tracing approximate contours across the MS complex cells; (2) topology-based spaghetti plots for uncertainty, a seeding scheme based on the hierarchical topology for visualizing uncertainty among ensemble scalar data; (3) virtual ribbons, a new scheme for visualizing multivariate data created by overlapping visual ribbons that encode the scalar variation of a region covered by uniform contours. We compare the new approaches with current alternatives.
{"title":"Visualizing 2D scalar fields with hierarchical topology","authors":"Keqin Wu, Song Zhang","doi":"10.1109/PACIFICVIS.2015.7156370","DOIUrl":"https://doi.org/10.1109/PACIFICVIS.2015.7156370","url":null,"abstract":"This paper describes an effort to create new visualizations by exploiting hierarchical scalar topology. First, we build a hierarchical topology through synchronously constructing and simplifying Contour Tree (CT) and Morse-Smale (MS) complex of scalar fields. We then introduce three algorithms based on the hierarchical topology: (1) topology-based multi-resolution contouring - an overview provided for a scalar field by extracting iso-values from the simplified CT and tracing approximate contours across the MS complex cells; (2) topology based spaghetti plots for uncertainty - a seeding scheme based on the hierarchical topology for visualizing uncertainty among ensemble scalar data; (3) virtual ribbons - a new scheme for visualizing multivariate data invented by overlapping visual ribbons which encode the scalar variation of a region covered by uniform contours. We compare the new approaches with current alternatives.","PeriodicalId":177381,"journal":{"name":"2015 IEEE Pacific Visualization Symposium (PacificVis)","volume":"266 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120959334","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Biclustering multivariate data for correlated subspace mining
Pub Date: 2015-04-14 | DOI: 10.1109/PACIFICVIS.2015.7156389
Kazuho Watanabe, Hsiang-Yun Wu, Yusuke Niibe, Shigeo Takahashi, I. Fujishiro
Exploring feature subspaces is a promising approach to analyzing and understanding the important patterns in multivariate data. When such exploration relies too heavily on manual intervention, the results depend on the knowledge and skills of the users performing the data analysis. This paper presents a novel approach to extracting feature subspaces from multivariate data by incorporating biclustering techniques. The approach is maximally automated in the sense that highly correlated dimensions are automatically grouped to form subspaces, which effectively supports their further exploration. A key idea behind our approach is a new mathematical formulation of asymmetric biclustering that combines spherical k-means clustering for grouping highly correlated dimensions with ordinary k-means clustering for identifying subsets of data samples. Lower-dimensional representations of the data in feature subspaces are visualized with parallel coordinate plots, where the data samples of correlated dimensions are projected onto one composite axis through dimensionality reduction schemes. Several experimental results of our data analysis, together with discussions, are provided to assess the capability of our approach.
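A minimal sketch of the asymmetric idea, assuming spherical k-means can be approximated by ordinary k-means on unit-normalized dimension vectors (i.e., grouping by cosine similarity); function name, cluster counts, and library choices are illustrative, not the paper's formulation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

def asymmetric_bicluster(data, n_dim_clusters=4, n_sample_clusters=5, seed=0):
    """Group correlated dimensions (spherical k-means proxy) and samples (k-means)."""
    # Rows of data.T are dimensions; unit-normalizing them makes Euclidean
    # k-means behave like clustering by cosine similarity.
    dim_vectors = normalize(data.T)                  # shape: (n_dims, n_samples)
    dim_labels = KMeans(n_clusters=n_dim_clusters, n_init=10,
                        random_state=seed).fit_predict(dim_vectors)

    # Ordinary k-means on the samples identifies subsets of data records.
    sample_labels = KMeans(n_clusters=n_sample_clusters, n_init=10,
                           random_state=seed).fit_predict(data)

    # Each (dimension cluster, sample cluster) pair is a candidate bicluster.
    return dim_labels, sample_labels

rng = np.random.default_rng(0)
data = rng.normal(size=(200, 12))                    # 200 samples, 12 dimensions
dim_labels, sample_labels = asymmetric_bicluster(data)
print(dim_labels, sample_labels[:10])
```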
{"title":"Biclustering multivariate data for correlated subspace mining","authors":"Kazuho Watanabe, Hsiang-Yun Wu, Yusuke Niibe, Shigeo Takahashi, I. Fujishiro","doi":"10.1109/PACIFICVIS.2015.7156389","DOIUrl":"https://doi.org/10.1109/PACIFICVIS.2015.7156389","url":null,"abstract":"Exploring feature subspaces is one of promising approaches to analyzing and understanding the important patterns in multivariate data. If relying too much on effective enhancements in manual interventions, the associated results depend heavily on the knowledge and skills of users performing the data analysis. This paper presents a novel approach to extracting feature subspaces from multivariate data by incorporating biclustering techniques. The approach has been maximally automated in the sense that highly-correlated dimensions are automatically grouped to form subspaces, which effectively supports further exploration of them. A key idea behind our approach lies in a new mathematical formulation of asymmetric biclustering, by combining spherical k-means clustering for grouping highly-correlated dimensions, together with ordinary k-means clustering for identifying subsets of data samples. Lower-dimensional representations of data in feature subspaces are successfully visualized by parallel coordinate plot, where we project the data samples of correlated dimensions to one composite axis through dimensionality reduction schemes. Several experimental results of our data analysis together with discussions will be provided to assess the capability of our approach.","PeriodicalId":177381,"journal":{"name":"2015 IEEE Pacific Visualization Symposium (PacificVis)","volume":"405 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134474139","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient volume illumination with multiple light sources through selective light updates
Pub Date: 2015-04-14 | DOI: 10.1109/PACIFICVIS.2015.7156382
E. Sundén, T. Ropinski
Incorporating volumetric illumination into the rendering of volumetric data increases visual realism, which can lead to improved spatial comprehension. It is known that spatial comprehension can be further improved by incorporating multiple light sources. However, many volumetric illumination algorithms have severe drawbacks when dealing with multiple light sources, mainly high performance penalties and memory usage, which can be tackled with specialized data structures or data undersampling. In contrast, in this paper we present a method that enables volumetric illumination with multiple light sources without requiring precomputation or impacting visual quality. To achieve this goal, we introduce selective light updates, which minimize the required computations when light settings are changed. We discuss and analyze the novel concepts underlying selective light updates and demonstrate them when applied to real-world data under different light settings.
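The core idea of selective light updates can be illustrated with a small cache of per-light contributions: only the changed light is re-shaded, and the total is rebuilt from the cache. This is a conceptual sketch; the `shade` callback and the class are assumptions, not the paper's GPU implementation.

```python
import numpy as np

class SelectiveLightCache:
    """Cache per-light illumination contributions so that changing one light's
    settings triggers exactly one recomputation."""

    def __init__(self, volume, lights, shade):
        self.volume = volume
        self.shade = shade                     # single-light illumination routine
        self.lights = list(lights)
        self.contrib = [shade(volume, light) for light in self.lights]

    def total_illumination(self):
        # Cheap recombination of the cached per-light contributions.
        return sum(self.contrib)

    def update_light(self, index, new_light):
        """Selective update: only the changed light is re-shaded."""
        self.lights[index] = new_light
        self.contrib[index] = self.shade(self.volume, new_light)
        return self.total_illumination()

# Toy usage with a scalar "shading" stand-in.
volume = np.ones((4, 4, 4))
lights = [{"intensity": 1.0}, {"intensity": 0.5}]
cache = SelectiveLightCache(volume, lights,
                            shade=lambda vol, l: l["intensity"] * vol)
cache.update_light(1, {"intensity": 2.0})      # recomputes only light 1
```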
{"title":"Efficient volume illumination with multiple light sources through selective light updates","authors":"E. Sundén, T. Ropinski","doi":"10.1109/PACIFICVIS.2015.7156382","DOIUrl":"https://doi.org/10.1109/PACIFICVIS.2015.7156382","url":null,"abstract":"Incorporating volumetric illumination into rendering of volumetric data increases visual realism, which can lead to improved spatial comprehension. It is known that spatial comprehension can be further improved by incorporating multiple light sources. However, many volumetric illumination algorithms have severe drawbacks when dealing with multiple light sources. These drawbacks are mainly high performance penalties and memory usage, which can be tackled with specialized data structures or data under sampling. In contrast, in this paper we present a method which enables volumetric illumination with multiple light sources without requiring precomputation or impacting visual quality. To achieve this goal, we introduce selective light updates which minimize the required computations when light settings are changed. We will discuss and analyze the novel concepts underlying selective light updates, and demonstrate them when applied to real-world data under different light settings.","PeriodicalId":177381,"journal":{"name":"2015 IEEE Pacific Visualization Symposium (PacificVis)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117015437","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Computation-to-core mapping strategies for iso-surface volume rendering on GPUs
Pub Date: 2015-04-14 | DOI: 10.1109/PACIFICVIS.2015.7156372
Junpeng Wang, Fei Yang, Yong Cao
The ray casting algorithm is a major component of direct volume rendering and exhibits inherent parallelism, making it suitable for graphics processing units (GPUs). However, blindly mapping the ray casting algorithm onto a GPU's complex parallel architecture can result in a significant performance loss. In this paper, a novel computation-to-core mapping strategy, called Warp Marching, is introduced for texture-based iso-surface volume rendering. We evaluate and compare this new strategy with the most commonly used existing mapping strategy. Texture cache performance and load balancing are the two major evaluation factors, since they have significant consequences for the overall rendering performance. Through a series of experiments on real-life data, we conclude that the texture cache performance of these two computation-to-core mapping strategies is significantly affected by the viewing direction, and that Warp Marching performs better in balancing workloads among threads and concurrent hardware components of a GPU.
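To make the contrast between computation-to-core mappings concrete, the sketch below spells out the index arithmetic of a conventional ray-per-thread mapping versus a warp-cooperative mapping in which the 32 threads of a warp stride over the samples of a single ray; whether the latter matches the paper's Warp Marching scheme exactly is my assumption, and the Python functions are only an illustration of the mapping, not a GPU kernel.

```python
WARP_SIZE = 32

def ray_per_thread(thread_id, n_samples):
    """Conventional mapping: each thread owns one ray and marches all its samples."""
    ray = thread_id
    return [(ray, sample) for sample in range(n_samples)]

def warp_per_ray(thread_id, n_samples):
    """Warp-cooperative mapping: the 32 threads of a warp share one ray and
    stride over its samples, so adjacent threads fetch adjacent texture samples."""
    warp, lane = divmod(thread_id, WARP_SIZE)
    ray = warp
    return [(ray, sample) for sample in range(lane, n_samples, WARP_SIZE)]

# Threads 0 and 1 under each mapping, for a ray of 8 samples:
print(ray_per_thread(0, 8), ray_per_thread(1, 8))
print(warp_per_ray(0, 8), warp_per_ray(1, 8))
```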
{"title":"Computation-to-core mapping strategies for iso-surface volume rendering on GPUs","authors":"Junpeng Wang, Fei Yang, Yong Cao","doi":"10.1109/PACIFICVIS.2015.7156372","DOIUrl":"https://doi.org/10.1109/PACIFICVIS.2015.7156372","url":null,"abstract":"Ray casting algorithm is a major component of the direct volume rendering, which exhibits inherent parallelism, making it suitable for graphics processing units (GPUs). However, blindly mapping the ray casting algorithm on a GPU's complex parallel architecture can result in a magnitude of performance loss. In this paper, a novel computation-to-core mapping strategy, called Warp Marching, for the texture-based iso-surface volume rendering is introduced. We evaluate and compare this new strategy with the most commonly used existing mapping strategy. Texture cache performance and load balancing are the two major evaluation factors since they have significant consequences on the overall rendering performance. Through a series of real-life data experiments, we conclude that the texture cache performances of these two computation-to-core mapping strategies are significantly affected by the viewing direction; and the Warp Marching performs better in balancing workloads among threads and concurrent hardware components of a GPU.","PeriodicalId":177381,"journal":{"name":"2015 IEEE Pacific Visualization Symposium (PacificVis)","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129143325","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
CloudGazer: A divide-and-conquer approach to monitoring and optimizing cloud-based networks
Pub Date: 2015-04-14 | DOI: 10.1109/PACIFICVIS.2015.7156375
Holger Stitz, S. Gratzl, M. T. Krieger, M. Streit
With the rise of virtualization and cloud-based networks of various scales and degrees of complexity, new approaches to managing such infrastructures are required. In these networks, relationships among components can be of arbitrary cardinality (1:1, 1:n, n:m), making it challenging for administrators to investigate which components influence others. In this paper we present CloudGazer, a scalable visualization system that allows users to monitor and optimize cloud-based networks effectively to reduce energy consumption and to increase the quality of service. Instead of visualizing the overall network, we split the graph into semantic perspectives that provide a much simpler view of the network. CloudGazer is a multiple coordinated view system that visualizes either static or live status information about the components of a perspective while reintroducing lost inter-perspective relationships on demand using dynamically created inlays. We demonstrate the effectiveness of CloudGazer in two usage scenarios: The first is based on a real-world network of our domain partners where static performance parameters are used to find an optimal design. In the second scenario we use the VAST 2013 Challenge dataset to demonstrate how the system can be employed with live streaming data.
{"title":"CloudGazer: A divide-and-conquer approach to monitoring and optimizing cloud-based networks","authors":"Holger Stitz, S. Gratzl, M. T. Krieger, M. Streit","doi":"10.1109/PACIFICVIS.2015.7156375","DOIUrl":"https://doi.org/10.1109/PACIFICVIS.2015.7156375","url":null,"abstract":"With the rise of virtualization and cloud-based networks of various scales and degrees of complexity, new approaches to managing such infrastructures are required. In these networks, relationships among components can be of arbitrary cardinality (1:1, 1:n, n:m), making it challenging for administrators to investigate which components influence others. In this paper we present CloudGazer, a scalable visualization system that allows users to monitor and optimize cloud-based networks effectively to reduce energy consumption and to increase the quality of service. Instead of visualizing the overall network, we split the graph into semantic perspectives that provide a much simpler view of the network. CloudGazer is a multiple coordinated view system that visualizes either static or live status information about the components of a perspective while reintroducing lost inter-perspective relationships on demand using dynamically created inlays. We demonstrate the effectiveness of CloudGazer in two usage scenarios: The first is based on a real-world network of our domain partners where static performance parameters are used to find an optimal design. In the second scenario we use the VAST 2013 Challenge dataset to demonstrate how the system can be employed with live streaming data.","PeriodicalId":177381,"journal":{"name":"2015 IEEE Pacific Visualization Symposium (PacificVis)","volume":"2015 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127636098","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dendrogramix: A hybrid tree-matrix visualization technique to support interactive exploration of dendrograms
Pub Date: 2015-04-14 | DOI: 10.1109/PACIFICVIS.2015.7156353
R. Blanch, Rémy Dautriche, G. Bisson
Clustering is often a first step when trying to make sense of a large data set. A wide family of cluster analysis algorithms, namely hierarchical clustering algorithms, does not provide a partition of the data set but a hierarchy of clusters organized in a binary tree, known as a dendrogram. The dendrogram has a classical node-link representation used by experts for various tasks: to decide which subtrees are actual clusters (e.g., by cutting the dendrogram at a given depth); to give those clusters a name by inspecting their content; etc. We present Dendrogramix, a hybrid tree-matrix interactive visualization of dendrograms that superimposes the relationships between individual objects onto the hierarchy of clusters. Dendrogramix enables users to perform tasks that involve both clusters and individual objects and that are impracticable with the classical representation, such as explaining why a particular object belongs to a particular cluster, or eliciting and understanding uncommon patterns (e.g., objects that could have been classified in a totally different cluster). These sensemaking tasks are supported by a consistent set of interaction techniques that facilitates the exploration of large clustering results.
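The "cut the dendrogram at a given depth" task mentioned above has a standard expression in SciPy, shown below on synthetic data; this only illustrates the classical workflow that Dendrogramix complements, not the tool itself.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)
points = rng.normal(size=(50, 3))

# Build the dendrogram (here: average-linkage hierarchical clustering).
Z = linkage(pdist(points), method="average")

# Cutting the dendrogram at a given depth: every merge above the chosen
# distance threshold is undone, and the remaining subtrees become clusters.
labels = fcluster(Z, t=1.5, criterion="distance")
print(sorted(set(labels)))
```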
{"title":"Dendrogramix: A hybrid tree-matrix visualization technique to support interactive exploration of dendrograms","authors":"R. Blanch, Rémy Dautriche, G. Bisson","doi":"10.1109/PACIFICVIS.2015.7156353","DOIUrl":"https://doi.org/10.1109/PACIFICVIS.2015.7156353","url":null,"abstract":"Clustering is often a first step when trying to make sense of a large data set. A wide family of cluster analysis algorithms, namely hierarchical clustering algorithms, does not provide a partition of the data set but a hierarchy of clusters organized in a binary tree, known as a dendrogram. The dendrogram has a classical node-link representation used by experts for various tasks like: to decide which subtrees are actual clusters (e.g., by cutting the dendrogram at a given depth); to give those clusters a name by inspecting their content; etc. We present Dendrogramix, a hybrid tree-matrix interactive visualization of dendrograms that superimposes the relationship between individual objects on to the hierarchy of clusters. Dendrogramix enables users to do tasks which involve both clusters and individual objects that are impracticable with the classical representation, like: to explain why a particular objects belongs to a particular cluster; to elicit and understand uncommon patterns (e.g., objects that could have been classified in a totally different cluster); etc. Those sensemaking tasks are supported by a consistent set of interaction techniques that facilitates the exploration of large clustering results.","PeriodicalId":177381,"journal":{"name":"2015 IEEE Pacific Visualization Symposium (PacificVis)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116354829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}