SIGGRAPH Asia 2017 Symposium on Visualization (Bangkok, Thailand, November 27, 2017)

Title: Parallel particle-based volume rendering using adaptive particle size adjustment technique
Authors: Kengo Hayashi, Takashi Shimizu, Naohisa Sakamoto, J. Nonaka
DOI: https://doi.org/10.1145/3139295.3139311
Abstract: Numerical simulation results generated in high performance computing (HPC) environments have grown enormously with recent advances in computer simulation technology, and there is increasing demand for extreme-scale visualization techniques. In this paper, we propose a parallel particle-based volume rendering method built on an adaptive particle size adjustment technique, which is suitable for handling large-scale, complex, distributed volume datasets in HPC environments. In our experiments, we apply the proposed technique to a large-scale unstructured thermal fluid simulation and construct a performance model to confirm its effectiveness.
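The particle-size adjustment idea can be illustrated with the opacity model commonly used in particle-based volume rendering (PBVR), where opacity arises from the density of opaque particles in a cell. The sketch below is not the authors' implementation: `pbvr_particle_count` uses the standard PBVR density relation, and `adaptive_radius` is a hypothetical scaling rule standing in for the paper's adjustment technique.

```python
import math

def pbvr_particle_count(alpha, radius, cell_volume, dt=1.0):
    """Particles needed in one cell for target opacity alpha.

    PBVR models opacity as alpha = 1 - exp(-pi * r^2 * rho * dt),
    so the required density is rho = -ln(1 - alpha) / (pi * r^2 * dt).
    """
    if not 0.0 <= alpha < 1.0:
        raise ValueError("opacity must be in [0, 1)")
    rho = -math.log(1.0 - alpha) / (math.pi * radius ** 2 * dt)
    return rho * cell_volume

def adaptive_radius(base_radius, cell_size, ref_cell_size):
    """Hypothetical adjustment: scale particle radius with local cell size
    so coarse and fine sub-volumes need comparable particle counts."""
    return base_radius * (cell_size / ref_cell_size)
```

Note that halving the radius quadruples the particle count for the same opacity, which is why adapting the radius to local resolution matters for distributed datasets with non-uniform cells.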
Title: Mining and visualizing eye movement data
Authors: Michael Burch
DOI: https://doi.org/10.1145/3139295.3139304
Abstract: Eye movement data has a spatio-temporal nature that makes the design of suitable visualization techniques a challenging task. Moreover, eye movement data is typically recorded by tracking the eyes of many study participants in order to draw significant conclusions about the visual task solution strategies they apply. When dealing with vast amounts of eye movement data, preprocessing the data by data mining is useful, since it can compute a set of rules that aggregate, filter, and hence reduce the original data to derive patterns in it. The generated rule sets are still large enough to serve as input data for a visual analytics system. In this paper we describe a visual analysis model for eye movement data that combines data mining and visualization, with the goal of conveying point-of-interest (POI) and area-of-interest (AOI) correlations in eye movement data at different levels of spatial and temporal granularity. These correlations can help a data analyst derive visual patterns that can be mapped to data patterns, i.e., visual scanning strategies with different probabilities across a group of eye-tracked participants. We show the usefulness of our data mining and visualization system by applying it to datasets recorded in a previously conducted eye tracking experiment investigating the readability of metro maps.
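The rule-mining step described above can be approximated in miniature: aggregate AOI-to-AOI transitions over all participants' scanpaths, then keep only transitions above a support threshold. This is a minimal sketch of that aggregate-filter-reduce idea, not the paper's actual mining algorithm; `min_support` is an assumed parameter.

```python
from collections import Counter

def mine_aoi_transitions(scanpaths, min_support=0.2):
    """Count AOI-to-AOI transitions across all participants and keep
    those whose relative frequency is at least min_support.

    scanpaths: list of AOI sequences, one per participant.
    Returns: {(src_aoi, dst_aoi): relative_frequency}.
    """
    counts = Counter()
    for path in scanpaths:
        # Consecutive fixation pairs form the transitions.
        for a, b in zip(path, path[1:]):
            counts[(a, b)] += 1
    total = sum(counts.values())
    return {t: c / total for t, c in counts.items()
            if c / total >= min_support}
```

The surviving transition rules are small enough to feed into a visual analytics view, mirroring the pipeline the abstract outlines.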
Title: Interactive design and visualization of N-ary relationships
Authors: Botong Qu, Prashant Kumar, E. Zhang, P. Jaiswal, L. Cooper, J. Elser, Yue Zhang
DOI: https://doi.org/10.1145/3139295.3139314
Abstract: Graph and network visualization is a well-researched area. However, graphs are limited in that, by definition, they encode only pairwise relationships between nodes. In this paper, we strive to visualize datasets that contain not only binary relationships between nodes but also higher-cardinality relationships (ternary, quaternary, quinary, senary, etc.). While such higher-cardinality relationships can be treated as cliques (complete graphs of N nodes), visualizing cliques with graph visualization techniques can lead to unnecessary visual clutter due to all the pairwise edges inside each clique. We develop a visualization for data whose relationships have cardinalities higher than two. By representing each N-ary relationship as an N-sided polygon, we turn the problem of visualizing such datasets into that of visualizing a two-dimensional complex, i.e., nodes, edges, and polygonal faces. This greatly reduces the number of edges needed to represent a clique and makes cliques, as well as their cardinalities, easier to recognize. We develop a set of principles that measure the effectiveness of visualizations of two-dimensional complexes, and we formulate a strategy for optimizing the positions of the nodes in the complex and the ordering of the nodes inside each clique. In addition, the user can further improve the layout by moving a node or a polygon in 3D and by changing the order of the nodes in a polygon. To demonstrate the effectiveness of our technique and system, we apply them to a social network and a gene dataset.
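The edge reduction claimed above is easy to quantify: a clique over N nodes needs N(N-1)/2 edges, while the same relationship drawn as one N-sided polygon needs only N. A small sketch (illustrative only, not part of the paper's system):

```python
def clique_edge_count(n):
    """Edges needed to draw an N-ary relationship as a complete graph."""
    return n * (n - 1) // 2

def polygon_edge_count(n):
    """Edges when the same relationship is one N-sided polygon face."""
    return n

def total_savings(cardinalities):
    """Total edge reduction over a dataset of relationship cardinalities."""
    return sum(clique_edge_count(n) - polygon_edge_count(n)
               for n in cardinalities)
```

For a senary (N = 6) relationship the clique needs 15 edges versus 6 polygon sides, and the savings grow quadratically with cardinality.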
Title: CPU volume rendering of adaptive mesh refinement data
Authors: I. Wald, Carson Brownlee, W. Usher, A. Knoll
DOI: https://doi.org/10.1145/3139295.3139305
Abstract: Adaptive Mesh Refinement (AMR) methods are widespread in scientific computing, and visualizing the resulting data with efficient and accurate rendering methods can be vital for enabling interactive data exploration. In this work, we detail a comprehensive solution for directly volume rendering block-structured (Berger-Colella) AMR data in the OSPRay interactive CPU ray tracing framework. In particular, we contribute a general method for representing and traversing AMR data using a kd-tree structure, along with four different reconstruction options, one of which (the basis function approach) is novel compared to existing methods. We demonstrate our system on two types of block-structured AMR data as well as compressed scalar field data, and show how it can be easily used in existing production-ready applications through a prototypical integration in the widely used visualization program ParaView.
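A kd-tree over AMR blocks supports the point queries a volume sampler issues along each ray: descend by split plane until reaching the leaf whose block covers the sample position. The following is a minimal sketch of that traversal under assumed data structures; OSPRay's actual implementation (in C++, with packet/SIMD traversal) differs.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Block:
    lower: Tuple[float, float, float]
    upper: Tuple[float, float, float]
    level: int  # refinement level; finer blocks shadow coarser ones

@dataclass
class KDNode:
    axis: int = -1                 # split axis for inner nodes
    split: float = 0.0             # split plane position
    left: Optional["KDNode"] = None
    right: Optional["KDNode"] = None
    block: Optional[Block] = None  # payload for leaf nodes

def locate(node: KDNode, p) -> Block:
    """Walk from the root to the leaf whose spatial region contains p."""
    while node.block is None:
        node = node.left if p[node.axis] < node.split else node.right
    return node.block
```

A reconstruction scheme (nearest cell, trilinear, or the paper's basis-function approach) would then interpolate scalar values from the located block and, near level boundaries, its neighbors.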
Title: Visual exploration of ionosphere disturbances for earthquake research
Authors: Fan Hong, Siming Chen, Hanqi Guo, Xiaoru Yuan, Jian Huang, Yongxian Zhang
DOI: https://doi.org/10.1145/3139295.3139301
Abstract: In seismic research, one hypothesis is that ionosphere disturbances are related to lithosphere activities such as earthquakes. Domain scientists are eager to discover disturbance patterns of electromagnetic attributes in the ionosphere around earthquakes and to propose related hypotheses. However, the typical workflow of seismic researchers supports pattern extraction from only a few earthquakes, and exploring pattern-based hypotheses on a large spatiotemporal scale is challenging given the limitations of their analysis tools. To tackle this problem, we developed a visual analytics system that not only supports the pattern extraction of the original workflow through dynamic queries, but also extends it with hypothesis exploration on a global scale. Domain scientists can easily use our system to explore the heterogeneous dataset, extract patterns, and explore related hypotheses visually and interactively. We conduct several case studies to demonstrate the usage and effectiveness of our system in researching the relationships between ionosphere disturbances and earthquakes.
Title: Visual exploration of mainframe workloads
Authors: C. Schulz, Nils Rodrigues, Krishna Damarla, Andreas Henicke, D. Weiskopf
DOI: https://doi.org/10.1145/3139295.3139312
Abstract: We present a visual analytics approach to support the workload management process for z/OS mainframes at IBM. This process typically requires the analysis of records consisting of 100 to 150 performance-related metrics, sampled over time. We aim to replace the previous spreadsheet-based workflow with one that is easier, faster, and more scalable in terms of measurement periods and collected performance metrics. To achieve this goal, we collaborated with a developer embedded at IBM in a formative process. Based on that experience, we discuss the application background and formulate requirements for supporting decision making based on performance data for large-scale systems. Our visual approach helps analysts find outliers, patterns, and relations between performance metrics through data exploration with various visualizations. We demonstrate the usefulness and applicability of line plots, scatter plots, scatter plot matrices, parallel coordinates, and correlation matrices for workload management. Finally, we evaluate our approach in a qualitative user study with IBM domain experts.
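A correlation matrix over 100-150 sampled metrics, as used above to surface relations between performance metrics, reduces to pairwise Pearson coefficients. A minimal stdlib sketch (illustrative; any real system would use a numerical library):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def correlation_matrix(metrics):
    """metrics: {name: [samples over time]} -> nested dict of coefficients,
    the data behind a correlation-matrix view."""
    names = list(metrics)
    return {a: {b: pearson(metrics[a], metrics[b]) for b in names}
            for a in names}
```

Cells near +1 or -1 flag metric pairs worth inspecting in the linked scatter plot views.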
Title: FactorLink: a visual analysis tool for sales performance management
Authors: Chuan Wang, Takeshi Onishi, K. Ma
DOI: https://doi.org/10.1145/3139295.3139300
Abstract: Sales Performance Management (SPM) solutions for enterprise-grade businesses generate large volumes of multi-dimensional data, including temporal event sequences and tabular attributes. Across attributes with different data structures, it is difficult to find clear connections between factors and outcomes. Discovering key factors and their influences from multivariate data can provide actionable advice that helps sales representatives (SRs) maintain healthy relationships with customers and achieve sales goals. This paper describes the FactorLink approach for 1) correlating temporal event sequences and multi-dimensional tabular data with their outcomes, 2) interactively assisting users in finding key factors and understanding their influences, and 3) exploring potential outcomes by reviewing and comparing the patterns found in the integrated SPM data. We conducted several case studies, and the results demonstrate the effectiveness of our approach.
Title: Boundary-structure-aware transfer functions for volume classification
Authors: Lina Yu, Hongfeng Yu
DOI: https://doi.org/10.1145/3139295.3139306
Abstract: We present novel transfer functions that advance the classification of volume data by combining the advantages of existing boundary-based and structure-based methods. We introduce the standard deviation of ambient occlusion to quantify the variation of both boundary and structure information across voxels, and name our method boundary-structure-aware transfer functions. Our method gives concrete guidelines for revealing the interior and exterior structures of features, especially for occluded objects without perfectly homogeneous intensities. Furthermore, our method separates these patterns from other materials that may have similar average intensities but different intensity variations. The proposed method extends the expressiveness and utility of volume rendering in extracting continuously changing patterns and achieving more robust volume classification.
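The core quantity above, the standard deviation of ambient occlusion over a voxel neighborhood, can serve as a second transfer-function axis alongside intensity. The sketch below is a toy illustration of that idea, not the paper's method; the neighborhood sampling and the `boundary_thresh` cutoff are assumptions.

```python
import math

def local_std(ao_samples):
    """Standard deviation of ambient-occlusion values sampled in a
    voxel's neighborhood; large values indicate varying boundary or
    structure information."""
    n = len(ao_samples)
    mean = sum(ao_samples) / n
    return math.sqrt(sum((v - mean) ** 2 for v in ao_samples) / n)

def classify(intensity, ao_std, boundary_thresh=0.1):
    """Toy 2D transfer-function lookup: high AO variation distinguishes
    boundary/structure regions from materials that merely share a
    similar average intensity (hypothetical threshold)."""
    return "boundary" if ao_std > boundary_thresh else "homogeneous"
```

Two materials with the same mean intensity but different AO variation land in different classes, which is the separation the abstract describes.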
Title: A visual causal exploration framework case study: a torrential rain and a flash flood in Kobe city
Authors: J. Nonaka, Naohisa Sakamoto, Y. Maejima, K. Ono, K. Koyamada
DOI: https://doi.org/10.1145/3139295.3139313
Abstract: Extreme weather events, such as unexpected and sudden torrential rains, have received increasing attention from specialists as well as the general public because they can cause severe material damage and human losses. Computational climate scientists have been working on high-resolution, time-varying, multivariate numerical simulations of this kind of short-term event, which is still hard to predict. Local governments in natural-disaster-prone countries such as Japan usually have disaster management sectors responsible for storing disaster-related data and analysis results. In this paper, we present a visualization framework that enables interactive exploration of causality, such as between disasters and the related extreme weather events. End users can identify the spatio-temporal regions where cause-effect relationships are strong. As a case study, we examined the unexpected torrential rain that occurred in the city of Kobe in 2008, where a flash flood in the urban area caused several human losses. We used high-resolution computational climate simulation results executed on a supercomputer, along with measured river level data obtained from the Civil Engineering Office of Kobe City. We expect that this kind of tool can assist specialists in better understanding the cause-effect relationships between extreme weather and the related disasters, as well as local government policy makers in formulating adaptation policies for disaster risk reduction.
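One simple way to quantify the cause-effect strength between two time series, such as simulated rainfall and measured river level, is lagged correlation: correlate the cause at time t with the effect at time t + lag and scan over lags. This sketch is an illustrative stand-in, not the framework's actual causality measure.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def lagged_correlation(cause, effect, lag):
    """Correlate cause[t] with effect[t + lag]; a peak at some positive
    lag suggests the effect follows the cause with that delay."""
    if lag > 0:
        cause, effect = cause[:-lag], effect[lag:]
    return pearson(cause, effect)
```

Plotting this value per spatial region and per lag yields the kind of map on which strong cause-effect regions stand out.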
Title: Mining probabilistic color palettes for summarizing color use in artwork collections
Authors: Ying Cao, Antoni B. Chan, Rynson W. H. Lau
DOI: https://doi.org/10.1145/3139295.3139296
Abstract: Artists and designers often use examples to find inspirational ideas for using colors. While growing public art repositories provide more examples to choose from, understanding the color use in such large artwork collections can be challenging. In this paper, we present a novel technique for summarizing the color use in large artwork collections. Our technique is based on a novel representation, probabilistic color palettes, which intuitively summarizes the contextual and stylistic use of colors in a collection of artworks. Unlike traditional color palettes, which only encapsulate what colors are used via a compact set of representative colors, probabilistic color palettes encode how the colors are used, in terms of frequencies, positions, and sizes, using an intuitive set of probability distributions. Given a collection of artworks organized by artist, we learn the probabilistic color palettes using a probabilistic colorization model, which describes the colorization process in a probabilistic framework and considers the impact of both spatial and semantic factors on it. The learned probabilistic color palettes allow users to quickly understand the color use within the collection. We present results on a large collection of artworks by different artists and evaluate the effectiveness of our probabilistic color palettes in a user study.
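The distinction between "what colors" and "how colors are used" can be sketched with a toy statistic: for each quantized color, record its usage frequency and the distribution of positions where it appears. This is a deliberately simplified stand-in for the paper's probabilistic colorization model; the quantization scheme and the (frequency, mean position) summary are assumptions.

```python
from collections import defaultdict

def build_probabilistic_palette(artworks, bins=4):
    """artworks: list of artworks, each a list of (r, g, b, x, y)
    samples with all components in [0, 1].

    Returns {quantized_color: {"freq": ..., "mean_pos": (mx, my)}},
    a toy per-color summary of frequency and spatial usage."""
    def q(v):  # quantize a channel into one of `bins` buckets
        return min(int(v * bins), bins - 1)

    groups = defaultdict(list)
    total = 0
    for art in artworks:
        for r, g, b, x, y in art:
            groups[(q(r), q(g), q(b))].append((x, y))
            total += 1

    palette = {}
    for color, positions in groups.items():
        mx = sum(x for x, _ in positions) / len(positions)
        my = sum(y for _, y in positions) / len(positions)
        palette[color] = {"freq": len(positions) / total,
                          "mean_pos": (mx, my)}
    return palette
```

A traditional palette would keep only the keys of this dictionary; the per-color statistics are what let the summary distinguish, say, an artist who uses blue for skies from one who uses it for foregrounds.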