Visualization, optimization, business strategy: a case study
Pub Date: 2003-10-22 | DOI: 10.1109/VISUAL.2003.1250416
D. Gresh, E. I. Kelton
We describe a visualization application intended for operational use in formulating business strategy in the customer service arena. The visualization capability provided in this application implicitly allows the user to better formulate the objective function for large optimization runs that minimize costs based on certain input parameters. Visualization is necessary because many of the inputs to the optimization runs are themselves strategic business decisions that are not pre-ordained. Both information visualization presentations and three-dimensional visualizations are included to help users better understand the cost/benefit tradeoffs of these strategic business decisions. Here, visualization explicitly provides value not attainable algorithmically, as the perceived benefit of different combinations of service levels has no a priori mathematical formulation. Thus, we take advantage of the fundamental power of visualization, bringing the user's intuition and pattern-recognition skills into the solution, while simultaneously taking advantage of the strength of algorithmic approaches to quickly and accurately find an optimal solution to a well-defined problem.
{"title":"Visualization, optimization, business strategy: a case study","authors":"D. Gresh, E. I. Kelton","doi":"10.1109/VISUAL.2003.1250416","DOIUrl":"https://doi.org/10.1109/VISUAL.2003.1250416","url":null,"abstract":"We describe a visualization application intended for operational use in formulating business strategy in the customer service arena. The visualization capability provided in this application implicitly allows the user to better formulate the objective function for large optimization runs which act to minimize costs based on certain input parameters. Visualization is necessary because many of the inputs to the optimization runs are themselves strategic business decisions which are not pre-ordained. Both information visualization presentations and three-dimensional visualizations are included to help users better understand the cost/benefit tradeoffs of these strategic business decisions. Here, visualization explicitly provides value not possible algorithmically, as the perceived benefit of different combinations of service level does not have an a priori mathematical formulation. Thus, we take advantage of the fundamental power of visualization, bringing the user's intuition and pattern recognition skills into the solution, while simultaneously taking advantage of the strength of algorithmic approaches to quickly and accurately find an optimal solution to a well-defined problem.","PeriodicalId":372131,"journal":{"name":"IEEE Visualization, 2003. VIS 2003.","volume":"93 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124081082","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Clifford convolution and pattern matching on vector fields
Pub Date: 2003-10-22 | DOI: 10.1109/VISUAL.2003.1250372
J. Ebling, G. Scheuermann
The goal of this paper is to define a convolution operation that carries image-processing and pattern-matching techniques over to the vector fields arising in flow visualization. This requires a multiplication of vectors, which Clifford algebra provides. We define a Clifford convolution on vector fields with uniform grids. The Clifford convolution works with multivector filter masks; scalar and vector masks can easily be converted to multivector fields, so filter masks from image processing on scalar fields can be applied alongside dedicated vector and scalar masks. Furthermore, a method for pattern matching with Clifford convolution on vector fields is described. The method is independent of the direction of the structures, which provides an automatic approach to feature detection. The detected features can be visualized with any known method, such as glyphs, isosurfaces, or streamlines. Because the features are defined by filter masks instead of analytical properties, the approach is more intuitive.
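The basic operation is easy to prototype in two dimensions. The following sketch is our own illustration, not the authors' implementation: 2D vectors are encoded as complex numbers, so the geometric (Clifford) product conj(m) * v carries the dot product in its real (scalar) part and the wedge product in its imaginary (bivector) part, and convolving a vector field with a small vector mask yields a multivector response field. Function and variable names are ours.

```python
import numpy as np

def clifford_convolve_2d(field, mask):
    """Convolve a 2D vector field with a vector filter mask using the
    geometric (Clifford) product of 2D vectors.

    Vectors are encoded as complex numbers (vx + 1j*vy), so conj(m) * v
    carries the dot product of m and v in its real (scalar) part and the
    wedge product in its imaginary (bivector) part.

    field : complex array (H, W)  -- one 2D vector per grid point
    mask  : complex array (h, w)  -- small vector filter mask
    Returns the complex (scalar + bivector) response, zero at the border.
    """
    H, W = field.shape
    h, w = mask.shape
    out = np.zeros((H, W), dtype=complex)
    m = np.conj(mask)
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            out[i + h // 2, j + w // 2] = np.sum(m * field[i:i + h, j:j + w])
    return out

# Toy example: convolve a vortex field with a small rotational mask.
yy, xx = np.mgrid[-16:16, -16:16].astype(float)
field = -yy + 1j * xx                      # simple vortex: v = (-y, x)
myy, mxx = np.mgrid[-1:2, -1:2].astype(float)
mask = -myy + 1j * mxx                     # 3x3 rotational filter mask
resp = clifford_convolve_2d(field, mask)
strength = np.abs(resp)                    # unnormalized match strength
rotation = np.angle(resp)                  # relative rotation between mask and local flow
```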
{"title":"Clifford convolution and pattern matching on vector fields","authors":"J. Ebling, G. Scheuermann","doi":"10.1109/VISUAL.2003.1250372","DOIUrl":"https://doi.org/10.1109/VISUAL.2003.1250372","url":null,"abstract":"The goal of this paper is to define a convolution operation which transfers image processing and pattern matching to vector fields from flow visualization. For this, a multiplication of vectors is necessary. Clifford algebra provides such a multiplication of vectors. We define a Clifford convolution on vector fields with uniform grids. The Clifford convolution works with multivector filter masks. Scalar and vector masks can be easily converted to multivector fields. So, filter masks from image processing on scalar fields can be applied as well as vector and scalar masks. Furthermore, a method for pattern matching with Clifford convolution on vector fields is described. The method is independent of the direction of the structures. This provides an automatic approach to feature detection. The features can be visualized using any known method like glyphs, isosurfaces or streamlines. The features are defined by filter masks instead of analytical properties and thus the approach is more intuitive.","PeriodicalId":372131,"journal":{"name":"IEEE Visualization, 2003. VIS 2003.","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128445634","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Accelerating large data analysis by exploiting regularities
Pub Date: 2003-10-22 | DOI: 10.1109/VISUAL.2003.1250420
D. Ellsworth, P. Moran
We present techniques for discovering and exploiting regularity in large curvilinear data sets. The data can be based on a single mesh or a mesh composed of multiple submeshes (also known as zones). Multi-zone data are typical in computational fluid dynamics (CFD) simulations. Regularities include axis-aligned rectilinear and cylindrical meshes, as well as cases where one zone is equivalent to a rigid-body transformation of another. Our algorithms can also discover rigid-body motion of meshes in time-series data. Next, we describe a data model in which the results of the discovery process are used to accelerate large data visualizations. Where possible, we replace general curvilinear zones with rectilinear or cylindrical zones. In rigid-body motion cases, we replace a time series of meshes with a transformed mesh object, in which a reference mesh is dynamically transformed based on a given time value in order to satisfy geometry requests on demand. The data model lets us make these substitutions and dynamic transformations transparently with respect to the visualization algorithms. We present results with large data sets where we combine our mesh replacement and transformation techniques with out-of-core paging to achieve analysis speedups ranging from 1.5× to 2×.
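One of the regularity tests described above, deciding whether one zone is a rigid-body transformation of another, can be sketched with a standard least-squares fit (the Kabsch algorithm). The snippet below is an illustrative reconstruction under the assumption that the two zones have corresponding node order; it is not the authors' code, and all names are ours.

```python
import numpy as np

def fit_rigid_transform(src, dst):
    """Least-squares rigid-body fit dst ~ R @ src + t (Kabsch algorithm).
    src, dst: (N, 3) arrays of corresponding mesh node positions."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # guard against reflections
    R = (U @ D @ Vt).T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

def is_rigid_copy(src, dst, tol=1e-6):
    """True if zone `dst` is (numerically) a rigid-body transform of zone `src`."""
    R, t = fit_rigid_transform(src, dst)
    residual = np.abs(dst - (src @ R.T + t)).max()
    return residual < tol, R, t

# Toy check: a rotated and translated copy of a zone is recognized as rigid.
rng = np.random.default_rng(0)
zone_a = rng.random((100, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
zone_b = zone_a @ R_true.T + np.array([5.0, -2.0, 1.0])
same, R, t = is_rigid_copy(zone_a, zone_b)
```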
{"title":"Accelerating large data analysis by exploiting regularities","authors":"D. Ellsworth, P. Moran","doi":"10.1109/VISUAL.2003.1250420","DOIUrl":"https://doi.org/10.1109/VISUAL.2003.1250420","url":null,"abstract":"We present techniques for discovering and exploiting regularity in large curvilinear data sets. The data can be based on a single mesh or a mesh composed of multiple submeshes (also known as zones). Multi-zone data are typical in Computational Fluid Dynamics (CFD) simulations. Regularities include axis-aligned rectilinear and cylindrical meshes as well as cases where one zone is equivalent to a rigid body transformation of another. Our algorithms can also discover rigid-body motion of meshes in time-series data. Next, we describe a data model where we can utilize the results from the discovery process in order to accelerate large data visualizations. Where possible, we replace general curvilinear zones with rectilinear or cylindrical zones. In rigid-body motion cases, we replace a time-series of meshes with a transformed mesh object where a reference mesh is dynamically transformed based on a given time value in order to satisfy geometry requests, on demand. The data model enables us to make these substitutions and dynamic transformations transparently with respect to the visualization algorithms. We present results with large data sets where we combine our mesh replacement and transformation techniques with out-of-core paging in order to achieve analysis speedups ranging from 1.5 to 2.","PeriodicalId":372131,"journal":{"name":"IEEE Visualization, 2003. VIS 2003.","volume":"150 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116342087","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploring curved anatomic structures with surface sections
Pub Date: 2003-10-22 | DOI: 10.1109/VISUAL.2003.1250351
Laurent Saroul, Sebastian Gerlach, R. Hersch
The extraction of planar sections from volume images is the most commonly used technique for inspecting and visualizing anatomic structures. We propose to generalize the concept of the planar section to the extraction of curved cross-sections (free-form surfaces). Compared with planar slices, curved cross-sections can easily follow the trajectory of tubular structures and organs such as the aorta or the colon. They may be extracted from a 3D volume, displayed as a 3D view, and possibly flattened. Flattening a curved cross-section makes it possible to inspect spatially complex relationships between anatomic structures and their neighborhood, and to carry out measurements along a specific orientation. To facilitate the interactive specification of free-form surfaces, users may navigate in real time within the body and select the slices on which the surface control points will be positioned. Immediate feedback is provided by displaying boundary curves as cylindrical markers within a 3D view composed of anatomic organs, planar slices, and possibly free-form surface sections. Extraction of curved surface sections is an additional service available online as a Java applet (http://visiblehuman.epfl.ch) and may be used as an advanced tool for exploring and teaching anatomy.
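The core extraction step, sampling the volume on a free-form surface and unfolding the result into a flat image, can be sketched in a few lines. The code below is a simplified illustration (a bilinear patch through four control points and trilinear volume interpolation) rather than the applet's actual spline surface and sampling; all names and the toy data are ours.

```python
import numpy as np

def trilinear_sample(volume, pts):
    """Trilinearly interpolate `volume` (Z, Y, X) at float points pts (N, 3) in (z, y, x) order."""
    z, y, x = pts[:, 0], pts[:, 1], pts[:, 2]
    z0 = np.clip(np.floor(z).astype(int), 0, volume.shape[0] - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, volume.shape[1] - 2)
    x0 = np.clip(np.floor(x).astype(int), 0, volume.shape[2] - 2)
    dz, dy, dx = z - z0, y - y0, x - x0
    out = 0.0
    for kz in (0, 1):
        for ky in (0, 1):
            for kx in (0, 1):
                w = (dz if kz else 1 - dz) * (dy if ky else 1 - dy) * (dx if kx else 1 - dx)
                out = out + w * volume[z0 + kz, y0 + ky, x0 + kx]
    return out

def flatten_curved_section(volume, control_pts, res=(128, 128)):
    """Evaluate a bilinear surface patch through 2x2 control points (each (z, y, x))
    and return the volume values sampled on it as a flat 2D image."""
    uu, vv = np.meshgrid(np.linspace(0, 1, res[0]), np.linspace(0, 1, res[1]), indexing="ij")
    p = (control_pts[0, 0] * ((1 - uu) * (1 - vv))[..., None]
         + control_pts[0, 1] * ((1 - uu) * vv)[..., None]
         + control_pts[1, 0] * (uu * (1 - vv))[..., None]
         + control_pts[1, 1] * (uu * vv)[..., None])
    return trilinear_sample(volume, p.reshape(-1, 3)).reshape(res)

# Toy usage: sample a curved (non-planar) patch from a synthetic volume.
vol = np.random.default_rng(1).random((64, 64, 64))
ctrl = np.array([[[10, 10, 10], [10, 50, 30]],
                 [[50, 10, 30], [50, 50, 10]]], dtype=float)
flat_image = flatten_curved_section(vol, ctrl)
```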
{"title":"Exploring curved anatomic structures with surface sections","authors":"Laurent Saroul, Sebastian Gerlach, R. Hersch","doi":"10.1109/VISUAL.2003.1250351","DOIUrl":"https://doi.org/10.1109/VISUAL.2003.1250351","url":null,"abstract":"The extraction of planar sections from volume images is the most commonly used technique for inspecting and visualizing anatomic structures. We propose to generalize the concept of planar section to the extraction of curved cross-sections (free form surfaces). Compared with planar slices, curved cross-sections may easily follow the trajectory of tubular structures and organs such as the aorta or the colon. They may be extracted from a 3D volume, displayed as a 3D view and possibly flattened. Flattening of curved cross-sections allows to inspect spatially complex relationship between anatomic structures and their neighborhood. They also allow to carry out measurements along a specific orientation. For the purpose of facilitating the interactive specification of free form surfaces, users may navigate in real time within the body and select the slices on which the surface control points will be positioned. Immediate feedback is provided by displaying boundary curves as cylindrical markers within a 3D view composed of anatomic organs, planar slices and possibly free form surface sections. Extraction of curved surface sections is an additional service that is available online as a Java applet (http://visiblehuman.epfl.ch). It may be used as an advanced tool for exploring and teaching anatomy.","PeriodicalId":372131,"journal":{"name":"IEEE Visualization, 2003. VIS 2003.","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116763633","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Curvature-based transfer functions for direct volume rendering: methods and applications
Pub Date: 2003-10-22 | DOI: 10.1109/VISUAL.2003.1250414
G. Kindlmann, R. Whitaker, T. Tasdizen, Torsten Möller
Direct volume rendering of scalar fields uses a transfer function to map locally measured data properties to opacities and colors. The domain of the transfer function is typically the one-dimensional space of scalar data values. This paper advances the use of curvature information in multi-dimensional transfer functions, with a methodology for computing high-quality curvature measurements. The proposed methodology combines an implicit formulation of curvature with convolution-based reconstruction of the field. We give concrete guidelines for implementing the methodology, and illustrate the importance of choosing accurate filters for computing derivatives with convolution. Curvature-based transfer functions are shown to extend the expressivity and utility of volume rendering through contributions in three different application areas: nonphotorealistic volume rendering, surface smoothing via anisotropic diffusion, and visualization of isosurface uncertainty.
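The implicit curvature formulation referred to above has a compact form: with the field's gradient g and Hessian H at a point, project the Hessian onto the isosurface tangent plane and read the principal curvatures off as eigenvalues. The sketch below implements that standard formulation at a single point, with finite differences standing in for the convolution-based derivative filters the paper advocates; the toy field and all names are ours.

```python
import numpy as np

def principal_curvatures(g, H):
    """Principal curvatures of the isosurface through a point, given the
    gradient g (3,) and Hessian H (3, 3) of the scalar field there:
    project the Hessian onto the tangent plane and take eigenvalues."""
    gnorm = np.linalg.norm(g)
    n = g / gnorm
    P = np.eye(3) - np.outer(n, n)           # projector onto the tangent plane
    G = -P @ H @ P / gnorm                   # curvature ("geometry") tensor
    eigvals = np.linalg.eigvalsh(G)
    k = sorted(eigvals, key=abs, reverse=True)[:2]   # drop the ~0 normal eigenvalue
    return max(k), min(k)

# Toy check on a sphere of radius r, with finite differences standing in
# for convolution-based derivative filters.
r = 10.0
f = lambda p: np.linalg.norm(p) - r          # scalar field whose zero isosurface is the sphere
p0 = np.array([r, 0.0, 0.0])
eps, I3 = 1e-3, np.eye(3)
g = np.array([(f(p0 + eps * I3[i]) - f(p0 - eps * I3[i])) / (2 * eps) for i in range(3)])
H = np.array([[(f(p0 + eps * (I3[i] + I3[j])) - f(p0 + eps * (I3[i] - I3[j]))
                - f(p0 - eps * (I3[i] - I3[j])) + f(p0 - eps * (I3[i] + I3[j]))) / (4 * eps ** 2)
               for j in range(3)] for i in range(3)])
k1, k2 = principal_curvatures(g, H)          # both ~ -1/r; the sign depends on whether the
                                             # field increases or decreases across the surface
```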
{"title":"Curvature-based transfer functions for direct volume rendering: methods and applications","authors":"G. Kindlmann, R. Whitaker, T. Tasdizen, Torsten Möller","doi":"10.1109/VISUAL.2003.1250414","DOIUrl":"https://doi.org/10.1109/VISUAL.2003.1250414","url":null,"abstract":"Direct volume rendering of scalar fields uses a transfer function to map locally measured data properties to opacities and colors. The domain of the transfer function is typically the one-dimensional space of scalar data values. This paper advances the use of curvature information in multi-dimensional transfer functions, with a methodology for computing high-quality curvature measurements. The proposed methodology combines an implicit formulation of curvature with convolution-based reconstruction of the field. We give concrete guidelines for implementing the methodology, and illustrate the importance of choosing accurate filters for computing derivatives with convolution. Curvature-based transfer functions are shown to extend the expressivity and utility of volume rendering through contributions in three different application areas: nonphotorealistic volume rendering, surface smoothing via anisotropic diffusion, and visualization of isosurface uncertainty.","PeriodicalId":372131,"journal":{"name":"IEEE Visualization, 2003. VIS 2003.","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127244732","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Holographic video display of time-series volumetric medical data
Pub Date: 2003-10-22 | DOI: 10.1109/VISUAL.2003.1250424
W. Plesniak, M. Halle, S. Pieper, W. Wells, M. Jakab, D. Meier, S. Benton, C. Guttmann, R. Kikinis
We describe an animated electro-holographic visualization of brain lesions due to the progression of multiple sclerosis. A research case study is used which documents the expression of visible brain lesions in a series of magnetic resonance imaging (MRI) volumes collected over the interval of one year. Some of the salient information resident within this data is described, and the motivation for using a dynamic spatial display to explore its spatial and temporal characteristics is stated. We provide a brief overview of spatial displays in medical imaging applications, and then describe our experimental visualization pipeline, from the processing of MRI datasets, through model construction, computer graphic rendering, and hologram encoding. The utility, strengths and shortcomings of the electro-holographic visualization are described and future improvements are suggested.
{"title":"Holographic video display of time-series volumetric medical data","authors":"W. Plesniak, M. Halle, S. Pieper, W. Wells, M. Jakab, D. Meier, S. Benton, C. Guttmann, R. Kikinis","doi":"10.1109/VISUAL.2003.1250424","DOIUrl":"https://doi.org/10.1109/VISUAL.2003.1250424","url":null,"abstract":"We describe an animated electro-holographic visualization of brain lesions due to the progression of multiple sclerosis. A research case study is used which documents the expression of visible brain lesions in a series of magnetic resonance imaging (MRI) volumes collected over the interval of one year. Some of the salient information resident within this data is described, and the motivation for using a dynamic spatial display to explore its spatial and temporal characteristics is stated. We provide a brief overview of spatial displays in medical imaging applications, and then describe our experimental visualization pipeline, from the processing of MRI datasets, through model construction, computer graphic rendering, and hologram encoding. The utility, strengths and shortcomings of the electro-holographic visualization are described and future improvements are suggested.","PeriodicalId":372131,"journal":{"name":"IEEE Visualization, 2003. VIS 2003.","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125591390","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
High dimensional direct rendering of time-varying volumetric data
Pub Date: 2003-10-22 | DOI: 10.1109/VISUAL.2003.1250402
J. Woodring, Chaoli Wang, Han-Wei Shen
We present an alternative method for viewing time-varying volumetric data. We consider such data as a four-dimensional data field, rather than considering space and time as separate entities. If we treat the data in this manner, we can apply high dimensional slicing and projection techniques to generate an image hyperplane. The user is provided with an intuitive user interface to specify arbitrary hyperplanes in 4D, which can be displayed with standard volume rendering techniques. From the volume specification, we are able to extract arbitrary hyperslices, combine slices together into a hyperprojection volume, or apply a 4D raycasting method to generate the same results. In combination with appropriate integration operators and transfer functions, we are able to extract and present different space-time features to the user.
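The hyperslicing idea has a direct array formulation: treat the time-varying volume as a 4D field, pick an origin and two direction vectors in 4D (which may mix spatial and temporal axes), and resample the field on the plane they span. The snippet below is a schematic nearest-neighbour version of that step, not the paper's renderer; the names and the sampling strategy are our simplifications.

```python
import numpy as np

def extract_hyperslice(field4d, origin, u, v, res=(64, 64), step=1.0):
    """Sample a 2D slice of a 4D scalar field on the plane origin + s*u + t*v
    using nearest-neighbour lookup.
    field4d : array of shape (X, Y, Z, T)
    origin  : (4,) point in index space
    u, v    : (4,) directions spanning the slice; they may mix space and time."""
    s = (np.arange(res[0]) - res[0] / 2.0) * step
    t = (np.arange(res[1]) - res[1] / 2.0) * step
    ss, tt = np.meshgrid(s, t, indexing="ij")
    pts = origin + ss[..., None] * u + tt[..., None] * v       # (res0, res1, 4)
    idx = np.rint(pts).astype(int)
    for d in range(4):                                          # clamp to the grid
        idx[..., d] = np.clip(idx[..., d], 0, field4d.shape[d] - 1)
    return field4d[idx[..., 0], idx[..., 1], idx[..., 2], idx[..., 3]]

# Toy usage: an x-t slice of a small time-varying volume.
data = np.random.default_rng(2).random((32, 32, 32, 16))       # (X, Y, Z, T)
origin = np.array([16.0, 16.0, 16.0, 8.0])
u = np.array([1.0, 0.0, 0.0, 0.0])                             # spatial direction
v = np.array([0.0, 0.0, 0.0, 1.0])                             # time direction
xt_slice = extract_hyperslice(data, origin, u, v)
```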
{"title":"High dimensional direct rendering of time-varying volumetric data","authors":"J. Woodring, Chaoli Wang, Han-Wei Shen","doi":"10.1109/VISUAL.2003.1250402","DOIUrl":"https://doi.org/10.1109/VISUAL.2003.1250402","url":null,"abstract":"We present an alternative method for viewing time-varying volumetric data. We consider such data as a four-dimensional data field, rather than considering space and time as separate entities. If we treat the data in this manner, we can apply high dimensional slicing and projection techniques to generate an image hyperplane. The user is provided with an intuitive user interface to specify arbitrary hyperplanes in 4D, which can be displayed with standard volume rendering techniques. From the volume specification, we are able to extract arbitrary hyperslices, combine slices together into a hyperprojection volume, or apply a 4D raycasting method to generate the same results. In combination with appropriate integration operators and transfer functions, we are able to extract and present different space-time features to the user.","PeriodicalId":372131,"journal":{"name":"IEEE Visualization, 2003. VIS 2003.","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132370775","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Signed distance transform using graphics hardware
Pub Date: 2003-10-22 | DOI: 10.1109/VISUAL.2003.1250358
C. Sigg, R. Peikert, M. Gross
This paper presents a signed distance transform algorithm using graphics hardware, which computes the scalar-valued function of the Euclidean distance to a given manifold of co-dimension one. If the manifold is closed and orientable, the distance has a negative sign on one side of the manifold and a positive sign on the other. Triangle meshes are considered for the representation of a two-dimensional manifold, and the distance function is sampled on a regular Cartesian grid. In order to achieve linear complexity in the number of grid points, we assign to each primitive a simple polyhedron enclosing its Voronoi cell. A Voronoi cell contains exactly those points that lie closest to its corresponding primitive, so the distance to the primitive only has to be computed for grid points inside its polyhedron. Although the Voronoi cells partition space, the polyhedra enclosing them do overlap; in regions where these overlaps occur, the minimum of all computed distances is assigned to a grid point. In order to speed up the computation, the points inside each polyhedron are determined by scan conversion of grid slices using graphics hardware. For this task, a fragment program performs the nonlinear interpolation and minimization of distance values.
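A CPU sketch of the same structure makes the algorithm concrete: for each triangle, distances are computed only for grid points inside an enlarged region around the primitive, each grid point keeps the minimum over all primitives, and the sign comes from the facet normal. The code below is an illustrative simplification, using axis-aligned boxes and dense surface sampling instead of the Voronoi-cell-enclosing polyhedra and hardware scan conversion described in the paper; all names are ours.

```python
import numpy as np

def signed_distance_grid(vertices, triangles, grid_shape, spacing=1.0, band=5.0, samples=8):
    """Approximate signed distance from grid points to a triangle mesh.
    Grid point (i, j, k) sits at position (i, j, k) * spacing; points farther
    than `band` from every triangle keep the value +inf.  For each triangle,
    only grid points inside an enlarged axis-aligned box are updated, and each
    point keeps the minimum distance found so far; the sign comes from the
    facet normal at the closest surface sample."""
    dist = np.full(grid_shape, np.inf)
    sign = np.ones(grid_shape)
    # reusable barycentric sample pattern covering a triangle
    uv = [(i / samples, j / samples) for i in range(samples + 1) for j in range(samples + 1 - i)]
    bary = np.array([(1 - u - v, u, v) for u, v in uv])          # (S, 3)

    for tri in triangles:
        p = vertices[tri]                                        # (3, 3) triangle corners
        n = np.cross(p[1] - p[0], p[2] - p[0])
        n = n / np.linalg.norm(n)
        pts = bary @ p                                           # (S, 3) points on the triangle
        lo = np.maximum(np.floor((pts.min(0) - band) / spacing).astype(int), 0)
        hi = np.minimum(np.ceil((pts.max(0) + band) / spacing).astype(int) + 1, grid_shape)
        if np.any(hi <= lo):
            continue                                             # triangle's box misses the grid
        ii, jj, kk = np.mgrid[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
        g = np.stack([ii, jj, kk], axis=-1) * spacing            # grid positions inside the box
        d = np.linalg.norm(g[..., None, :] - pts, axis=-1)       # distance to every sample
        nearest = d.argmin(axis=-1)
        dmin = np.take_along_axis(d, nearest[..., None], axis=-1)[..., 0]
        s = np.where(np.sum((g - pts[nearest]) * n, axis=-1) >= 0, 1.0, -1.0)
        box_d = dist[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
        box_s = sign[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
        closer = dmin < box_d
        box_d[closer] = dmin[closer]
        box_s[closer] = s[closer]
    return sign * dist

# Toy usage: one triangle in a small grid.
verts = np.array([[4.0, 4.0, 8.0], [12.0, 4.0, 8.0], [8.0, 12.0, 8.0]])
tris = np.array([[0, 1, 2]])
sdf = signed_distance_grid(verts, tris, grid_shape=(16, 16, 16))
```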
{"title":"Signed distance transform using graphics hardware","authors":"C. Sigg, R. Peikert, M. Gross","doi":"10.1109/VISUAL.2003.1250358","DOIUrl":"https://doi.org/10.1109/VISUAL.2003.1250358","url":null,"abstract":"This paper presents a signed distance transform algorithm using graphics hardware, which computes the scalar valued function of the Euclidean distance to a given manifold of co-dimension one. If the manifold is closed and orientable, the distance has a negative sign on one side of the manifold and a positive sign on the other. Triangle meshes are considered for the representation of a two-dimensional manifold and the distance function is sampled on a regular Cartesian grid. In order to achieve linear complexity in the number of grid points, to each primitive we assign a simple polyhedron enclosing its Voronoi cell. Voronoi cells are known to contain exactly all points that lay closest to its corresponding primitive. Thus, the distance to the primitive only has to be computed for grid points inside its polyhedron. Although Voronoi cells partition space, the polyhedrons enclosing these cells do overlap. In regions where these overlaps occur, the minimum of all computed distances is assigned to a grid point. In order to speed up computations, points inside each polyhedron are determined by scan conversion of grid slices using graphics hardware. For this task, a fragment program is used to perform the nonlinear interpolation and minimization of distance values.","PeriodicalId":372131,"journal":{"name":"IEEE Visualization, 2003. VIS 2003.","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115116545","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Image space based visualization of unsteady flow on surfaces
Pub Date: 2003-10-22 | DOI: 10.1109/VISUAL.2003.1250364
R. Laramee, B. Jobard, H. Hauser
We present a technique for direct visualization of unsteady flow on surfaces from computational fluid dynamics. The method generates dense representations of time-dependent vector fields with high spatio-temporal correlation, using both Lagrangian-Eulerian advection and image-based flow visualization as its foundation. While the 3D vector fields are associated with arbitrary triangular surface meshes, the generation and advection of texture properties is confined to image space. Frame rates of up to 20 frames per second are realized by exploiting graphics card hardware. We apply this algorithm to unsteady flow on boundary surfaces of large, complex meshes from computational fluid dynamics composed of more than 250,000 polygons, to dynamic meshes with time-dependent geometry and topology, and to medical data.
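A 2D toy version of the image-space advect-and-blend loop conveys the idea: each frame, the texture is looked up at positions advected backward along the projected vector field and blended with a little fresh noise, which builds up dense, temporally coherent streaks. The sketch below is a plain NumPy analogue of that loop (the paper performs it per pixel on the GPU with vectors projected from the surface mesh); parameter values and names are ours.

```python
import numpy as np

def advect_blend(tex, vx, vy, noise, dt=1.0, alpha=0.1):
    """One step of image-space texture advection: look the texture up at
    positions advected backward along (vx, vy), then blend in fresh noise."""
    H, W = tex.shape
    jj, ii = np.meshgrid(np.arange(W), np.arange(H))
    src_i = np.clip(np.rint(ii - dt * vy).astype(int), 0, H - 1)
    src_j = np.clip(np.rint(jj - dt * vx).astype(int), 0, W - 1)
    return (1.0 - alpha) * tex[src_i, src_j] + alpha * noise

# Toy usage: iterate over a rotating (vortex) field; streaks build up over the iterations.
rng = np.random.default_rng(3)
H, W = 128, 128
yy, xx = np.mgrid[0:H, 0:W].astype(float)
vx, vy = -(yy - H / 2) * 0.05, (xx - W / 2) * 0.05
tex = rng.random((H, W))
for _ in range(50):
    tex = advect_blend(tex, vx, vy, rng.random((H, W)))
```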
{"title":"Image space based visualization of unsteady flow on surfaces","authors":"R. Laramee, B. Jobard, H. Hauser","doi":"10.1109/VISUAL.2003.1250364","DOIUrl":"https://doi.org/10.1109/VISUAL.2003.1250364","url":null,"abstract":"We present a technique for direct visualization of unsteady flow on surfaces from computational fluid dynamics. The method generates dense representations of time-dependent vector fields with high spatio-temporal correlation using both Lagrangian-Eulerian advection and image based flow visualization as its foundation. While the 3D vector fields are associated with arbitrary triangular surface meshes, the generation and advection of texture properties is confined to image space. Frame rates of up to 20 frames per second are realized by exploiting graphics card hardware. We apply this algorithm to unsteady flow on boundary surfaces of, large, complex meshes from computational fluid dynamics composed of more than 250,000 polygons, dynamic meshes with time-dependent geometry and topology, as well as medical data.","PeriodicalId":372131,"journal":{"name":"IEEE Visualization, 2003. VIS 2003.","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134448433","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Video visualization
Pub Date: 2003-10-22 | DOI: 10.2312/conf/EG2013/tutorials/t2
G. Daniel, Min Chen
Video data, generated by the entertainment industry, security and traffic cameras, video conferencing systems, video emails, and so on, is perhaps the most time-consuming kind of data for human beings to process. In this paper, we present a novel methodology for "summarizing" video sequences using volume visualization techniques. We outline a system pipeline for capturing videos, extracting features, volume rendering video and feature data, and creating video visualizations. We discuss a collection of image comparison metrics, including the linear dependence detector, for constructing "relative" and "absolute" difference volumes that represent the magnitude of variation between video frames. We describe the use of several volume visualization techniques, including volume scene graphs and spatial transfer functions, for creating video visualization. In particular, we present a stream-based technique for processing and directly rendering video data in real time. With the aid of several examples, we demonstrate the effectiveness of using video visualization to convey meaningful information contained in video sequences.
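The difference-volume construction is straightforward to prototype: stack the frames into a volume and compute, per voxel, the change between consecutive frames and the change against a reference frame (approximated here by the temporal mean). The snippet below is an illustrative simplification; the paper's comparison metrics, including the linear dependence detector, are richer than plain absolute differences, and how the two outputs map onto the paper's "relative" and "absolute" volumes follows its own definitions.

```python
import numpy as np

def difference_volumes(frames):
    """Build two simple difference volumes from a grayscale video.
    frames : array of shape (T, H, W).
    Returns (frame_diff, reference_diff):
      frame_diff[t]     = |frames[t+1] - frames[t]|   (change between consecutive frames)
      reference_diff[t] = |frames[t] - background|    (change against a reference frame),
    with the background approximated here by the temporal mean."""
    frames = frames.astype(float)
    frame_diff = np.abs(np.diff(frames, axis=0))
    reference_diff = np.abs(frames - frames.mean(axis=0))
    return frame_diff, reference_diff

# Toy usage: a bright square moving across a dark background.
T, H, W = 16, 64, 64
video = np.zeros((T, H, W))
for t in range(T):
    video[t, 20:30, 2 + 3 * t: 12 + 3 * t] = 1.0
frame_diff, reference_diff = difference_volumes(video)   # both can be volume rendered directly
```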
{"title":"Video visualization","authors":"G. Daniel, Min Chen","doi":"10.2312/conf/EG2013/tutorials/t2","DOIUrl":"https://doi.org/10.2312/conf/EG2013/tutorials/t2","url":null,"abstract":"Video data, generated by the entertainment industry, security and traffic cameras, video conferencing systems, video emails, and so on, is perhaps most time-consuming to process by human beings. In this paper, we present a novel methodology for \"summarizing\" video sequences using volume visualization techniques. We outline a system pipeline for capturing videos, extracting features, volume rendering video and feature data, and creating video visualization. We discuss a collection of image comparison metrics, including the linear dependence detector, for constructing \"relative\" and \"absolute\" difference volumes that represent the magnitude of variation between video frames. We describe the use of a few volume visualization techniques, including volume scene graphs and spatial transfer functions, for creating video visualization. In particular, we present a stream-based technique for processing and directly rendering video data in real time. With the aid of several examples, we demonstrate the effectiveness of using video visualization to convey meaningful information contained in video sequences.","PeriodicalId":372131,"journal":{"name":"IEEE Visualization, 2003. VIS 2003.","volume":"93 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122112312","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}