Pub Date: 2003-10-22 | DOI: 10.1109/VISUAL.2003.1250365
P. Bremer, H. Edelsbrunner, B. Hamann, Valerio Pascucci
We combine topological and geometric methods to construct a multi-resolution data structure for functions over two-dimensional domains. Starting with the Morse-Smale complex, we construct a topological hierarchy by progressively canceling critical points in pairs. Concurrently, we create a geometric hierarchy by adapting the geometry to the changes in topology. The data structure supports mesh traversal operations similarly to traditional multi-resolution representations.
{"title":"A multi-resolution data structure for two-dimensional Morse-Smale functions","authors":"P. Bremer, H. Edelsbrunner, B. Hamann, Valerio Pascucci","doi":"10.1109/VISUAL.2003.1250365","DOIUrl":"https://doi.org/10.1109/VISUAL.2003.1250365","url":null,"abstract":"We combine topological and geometric methods to construct a multi-resolution data structure for functions over two-dimensional domains. Starting with the Morse-Smale complex, we construct a topological hierarchy by progressively canceling critical points in pairs. Concurrently, we create a geometric hierarchy by adapting the geometry to the changes in topology. The data structure supports mesh traversal operations similarly to traditional multi-resolution representations.","PeriodicalId":372131,"journal":{"name":"IEEE Visualization, 2003. VIS 2003.","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123924292","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2003-10-22 | DOI: 10.1109/VISUAL.2003.1250415
S. Teoh, K. Ma, S. F. Wu
The Internet pervades many aspects of our lives and is becoming indispensable to critical functions in areas such as commerce, government, production and general information dissemination. To maintain the stability and efficiency of the Internet, every effort must be made to protect it against various forms of attacks, malicious users, and errors. A key component in the Internet security effort is the routine examination of Internet routing data, which unfortunately can be too large and complicated to browse directly. We have developed an interactive visualization process which proves to be very effective for the analysis of Internet routing data. In this application paper, we show how each step in the visualization process helps direct the analysis and glean insights from the data. These insights include the discovery of patterns, detection of faults and abnormal events, understanding of event correlations, formation of causation hypotheses, and classification of anomalies. We also discuss lessons learned in our visual analysis study.
{"title":"A visual exploration process for the analysis of Internet routing data","authors":"S. Teoh, K. Ma, S. F. Wu","doi":"10.1109/VISUAL.2003.1250415","DOIUrl":"https://doi.org/10.1109/VISUAL.2003.1250415","url":null,"abstract":"The Internet pervades many aspects of our lives and is becoming indispensable to critical functions in areas such as commerce, government, production and general information dissemination. To maintain the stability and efficiency of the Internet, every effort must be made to protect it against various forms of attacks, malicious users, and errors. A key component in the Internet security effort is the routine examination of Internet routing data, which unfortunately can be too large and complicated to browse directly. We have developed an interactive visualization process which proves to be very effective for the analysis of Internet routing data. In this application paper, we show how each step in the visualization process helps direct the analysis and glean insights from the data. These insights include the discovery of patterns, detection of faults and abnormal events, understanding of event correlations, formation of causation hypotheses, and classification of anomalies. We also discuss lessons learned in our visual analysis study.","PeriodicalId":372131,"journal":{"name":"IEEE Visualization, 2003. VIS 2003.","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116641801","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2003-10-22 | DOI: 10.1109/VISUAL.2003.1250350
Jeremy Jaech, S. North, Mike Peery, Will Schroeder, James J. Thomas
This forum examines the question of when commercial software makes sense and when open-source software is more appropriate. Visualization software runs the gamut from very general-purpose applications intended for the graphically challenged to highly specific software libraries intended for the sophisticated visualization expert. Where along that spectrum is the realm of commercial software, and where does open-source software make sense? Should all visualization software be open source? Why might end users choose to purchase visualization tools instead of using open-source tools?
{"title":"The visualization market: open source vs. commercial approaches","authors":"Jeremy Jaech, S. North, Mike Peery, Will Schroeder, James J. Thomas","doi":"10.1109/VISUAL.2003.1250350","DOIUrl":"https://doi.org/10.1109/VISUAL.2003.1250350","url":null,"abstract":"This forum examines the question of when commercial software makes sense and when open-source software is more appropriate. Visualization software runs the gamut from very general-purpose applications intended for the graphically challenged to highly specific software libraries intended for the sophisticated visualization expert. Where along that spectrum is the realm of commercial software, and where does open-source software make sense? Should all visualization software be open source? Why might end users choose to purchase visualization tools instead of using opensource tools?","PeriodicalId":372131,"journal":{"name":"IEEE Visualization, 2003. VIS 2003.","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115345923","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2003-10-22 | DOI: 10.1109/VISUAL.2003.1250381
M. Ikits, J. D. Brederson, C. Hansen, Christopher R. Johnson
We present a haptic rendering technique that uses directional constraints to facilitate enhanced exploration modes for volumetric datasets. The algorithm restricts user motion in certain directions by incrementally moving a proxy point along the axes of a local reference frame. Reaction forces are generated by a spring coupler between the proxy and the data probe, which can be tuned to the capabilities of the haptic interface. Secondary haptic effects including field forces, friction, and texture can be easily incorporated to convey information about additional characteristics of the data. We illustrate the technique with two examples: displaying fiber orientation in heart muscle layers and exploring diffusion tensor fiber tracts in brain white matter tissue. Initial evaluation of the approach indicates that haptic constraints provide an intuitive means of displaying directional information in volume data.
{"title":"A constraint-based technique for haptic volume exploration","authors":"M. Ikits, J. D. Brederson, C. Hansen, Christopher R. Johnson","doi":"10.1109/VISUAL.2003.1250381","DOIUrl":"https://doi.org/10.1109/VISUAL.2003.1250381","url":null,"abstract":"We present a haptic rendering technique that uses directional constraints to facilitate enhanced exploration modes for volumetric datasets. The algorithm restricts user motion in certain directions by incrementally moving a proxy point along the axes of a local reference frame. Reaction forces are generated by a spring coupler between the proxy and the data probe, which can be tuned to the capabilities of the haptic interface. Secondary haptic effects including field forces, friction, and texture can be easily incorporated to convey information about additional characteristics of the data. We illustrate the technique with two examples: displaying fiber orientation in heart muscle layers and exploring diffusion tensor fiber tracts in brain white matter tissue. Initial evaluation of the approach indicates that haptic constraints provide an intuitive means or displaying directional information in volume data.","PeriodicalId":372131,"journal":{"name":"IEEE Visualization, 2003. VIS 2003.","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130085607","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2003-10-22 | DOI: 10.1109/VISUAL.2003.1250376
H. Theisel, T. Weinkauf, H. Hege, H. Seidel
One of the reasons that topological methods have limited popularity for the visualization of complex 3D flow fields is that such topological structures contain a number of separating stream surfaces. Since these stream surfaces tend to hide each other as well as other topological features, for complex 3D topologies the visualizations become cluttered and hardly interpretable. This paper proposes to use particular stream lines called saddle connectors instead of separating stream surfaces and to depict single surfaces only on user demand. We discuss properties and computational issues of saddle connectors and apply these methods to complex flow data. We show that the use of saddle connectors makes topological skeletons available as a valuable visualization tool even for topologically complex 3D flow data.
{"title":"Saddle connectors - an approach to visualizing the topological skeleton of complex 3D vector fields","authors":"H. Theisel, T. Weinkauf, H. Hege, H. Seidel","doi":"10.1109/VISUAL.2003.1250376","DOIUrl":"https://doi.org/10.1109/VISUAL.2003.1250376","url":null,"abstract":"One of the reasons that topological methods have a limited popularity for the visualization of complex 3D flow fields is the fact that such topological structures contain a number of separating stream surfaces. Since these stream surfaces tend to hide each other as well as other topological features, for complex 3D topologies the visualizations become cluttered and hardly interpretable. This paper proposes to use particular stream lines called saddle connectors instead of separating stream surfaces and to depict single surfaces only on user demand. We discuss properties and computational issues of saddle connectors and apply these methods to complex flow data. We show that the use of saddle connectors makes topological skeletons available as a valuable visualization tool even for topologically complex 3D flow data.","PeriodicalId":372131,"journal":{"name":"IEEE Visualization, 2003. VIS 2003.","volume":"1992 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125530591","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2003-10-22 | DOI: 10.1109/VISUAL.2003.1250410
R. Tam, W. Heidrich
We present a new algorithm for simplifying the shape of 3D objects by manipulating their medial axis transform (MAT). From an unorganized set of boundary points, our algorithm computes the MAT, decomposes the axis into parts, then selectively removes a subset of these parts in order to reduce the complexity of the overall shape. The result is a simplified MAT that can be used for a variety of shape operations. In addition, a polygonal surface of the resulting shape can be directly generated from the filtered MAT using a robust surface reconstruction method. The algorithm presented is shown to have a number of advantages over other existing approaches.
{"title":"Shape simplification based on the medial axis transform","authors":"R. Tam, W. Heidrich","doi":"10.1109/VISUAL.2003.1250410","DOIUrl":"https://doi.org/10.1109/VISUAL.2003.1250410","url":null,"abstract":"We present a new algorithm for simplifying the shape of 3D objects by manipulating their medial axis transform (MAT). From an unorganized set of boundary points, our algorithm computes the MAT, decomposes the axis into parts, then selectively removes a subset of these parts in order to reduce the complexity of the overall shape. The result is simplified MAT that can be used for a variety of shape operations. In addition, a polygonal surface of the resulting shape can be directly generated from the filtered MAT using a robust surface reconstruction method. The algorithm presented is shown to have a number of advantages over other existing approaches.","PeriodicalId":372131,"journal":{"name":"IEEE Visualization, 2003. VIS 2003.","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126366003","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2003-10-22 | DOI: 10.1109/VISUAL.2003.1250407
M. Tarini, Paolo Cignoni, Roberto Scopigno
In this paper we propose a new method for the creation of normal maps for recovering the detail on simplified meshes and a set of objective techniques to metrically evaluate the quality of different recovering techniques. The proposed technique, which automatically produces a normal-map texture for a simple 3D model that "imitates" the high-frequency detail originally present in a second, much higher-resolution one, is based on the computation of per-texel visibility and self-occlusion information. This information is used to define a point-to-point correspondence between simplified and high-resolution meshes. Moreover, we introduce a number of criteria for measuring the quality (visual or otherwise) of a given mapping method, and provide efficient algorithms to implement them. Lastly, we apply them to rate different mapping methods, including the widely used ones and the new one proposed here.
{"title":"Visibility based methods and assessment for detail-recovery","authors":"M. Tarini, Paolo Cignoni, Roberto Scopigno","doi":"10.1109/VISUAL.2003.1250407","DOIUrl":"https://doi.org/10.1109/VISUAL.2003.1250407","url":null,"abstract":"In this paper we propose a new method for the creation of normal maps for recovering the detail on simplified meshes and a set of objective techniques to metrically evaluate the quality of different recovering techniques. The proposed techniques, that automatically produces a normal-map texture for a simple 3D model that \"imitates\" the high frequency detail originally present in a second, much higher resolution one, is based on the computation of per-texel visibility and self-occlusion information. This information is used to define a point-to-point correspondence between simplified and hires meshes. Moreover, we introduce a number of criteria for measuring the quality (visual or otherwise) of a given mapping method, and provide efficient algorithms to implement them. Lastly, we apply them to rate different mapping methods, including the widely used ones and the new one proposed here.","PeriodicalId":372131,"journal":{"name":"IEEE Visualization, 2003. VIS 2003.","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127465420","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2003-10-22 | DOI: 10.1109/VISUAL.2003.1250387
I. Viola, A. Kanitsar, E. Gröller
Non-linear filtering is an important task for volume analysis. This paper presents hardware-based implementations of various non-linear filters for volume smoothing with edge preservation. The Cg high-level shading language is used in combination with the latest PC consumer graphics hardware. Filtering is divided into per-vertex and per-fragment stages. In both stages we propose techniques to increase the filtering performance. The vertex program pre-computes texture coordinates in order to address all contributing input samples of the operator mask. Thus additional computations are avoided in the fragment program. The presented fragment programs preserve cache coherence and exploit 4D vector arithmetic and internal fixed-point arithmetic to increase performance. We show the applicability of non-linear filters as part of a GPU-based segmentation pipeline. The resulting binary mask is compressed and decompressed in the graphics memory on-the-fly.
{"title":"Hardware-based nonlinear filtering and segmentation using high-level shading languages","authors":"I. Viola, A. Kanitsar, E. Gröller","doi":"10.1109/VISUAL.2003.1250387","DOIUrl":"https://doi.org/10.1109/VISUAL.2003.1250387","url":null,"abstract":"Non-linear filtering is an important task for volume analysis. This paper presents hardware-based implementations of various non-linear filters for volume smoothing with edge preservation. The Cg high-level shading language is used in combination with latest PC consumer graphics hardware. Filtering is divided into pervertex and per-fragment stages. In both stages we propose techniques to increase the filtering performance. The vertex program pre-computes texture coordinates in order to address all contributing input samples of the operator mask. Thus additional computations are avoided in the fragment program. The presented fragment programs preserve cache coherence, exploit 4D vector arithmetic, and internal fixed point arithmetic to increase performance. We show the applicability of non-linear filters as part of a GPU-based segmentation pipeline. The resulting binary mask is compressed and decompressed in the graphics memory on-the-fly.","PeriodicalId":372131,"journal":{"name":"IEEE Visualization, 2003. VIS 2003.","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131150158","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2003-10-22 | DOI: 10.1109/VISUAL.2003.1250429
R. Machiraju, Chris R. Johnson, T. Yoo, R. Crawfis, D. Ebert, D. Stredney
Raw data from scanners and simulations has insight embedded within it. However, there is a need to explicitly glean the insight from the data or a version of it. Visualization algorithms and methods are designed to do just that. What insight is to be gleaned depends on the data, its use, and the medium of display. Thus, visualization embodies all tasks that increase information content and understanding when data are presented to users.
{"title":"Do I really see a bone?","authors":"R. Machiraju, Chris R. Johnson, T. Yoo, R. Crawfis, D. Ebert, D. Stredney","doi":"10.1109/VISUAL.2003.1250429","DOIUrl":"https://doi.org/10.1109/VISUAL.2003.1250429","url":null,"abstract":"Raw data from scanners and simulations has insight embedded within it. However, there is a need to explicitly glean the insight from the data or a version of it. Visualization algorithms and methods are designed just to do that. What insight is to be gleaned depends on the data, its use, and the medium of display. Thus, visualization embodies all tasks that increase information content and understanding when presented to the users.","PeriodicalId":372131,"journal":{"name":"IEEE Visualization, 2003. VIS 2003.","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133076056","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2003-10-22 | DOI: 10.1109/VISUAL.2003.1250413
Fan-Yin Tzeng, E. Lum, K. Ma
In the traditional volume visualization paradigm, the user specifies a transfer function that assigns each scalar value to a color and opacity by defining an opacity and a color map function. The transfer function has two limitations. First, the user must define curves based on histogram and value rather than seeing and working with the volume itself. Second, the transfer function is inflexible in classifying regions of interest, where values at a voxel such as intensity and gradient are used to differentiate material, not taking into account additional properties such as texture and position. We describe an intuitive user interface for specifying the classification functions that consists of the user painting directly on sample slices of the volume. These painted regions are used to automatically define high-dimensional classification functions that can be implemented in hardware for interactive rendering. The classification of the volume is iteratively improved as the user paints samples, allowing intuitive and efficient viewing of materials of interest.
{"title":"A novel interface for higher-dimensional classification of volume data","authors":"Fan-Yin Tzeng, E. Lum, K. Ma","doi":"10.1109/VISUAL.2003.1250413","DOIUrl":"https://doi.org/10.1109/VISUAL.2003.1250413","url":null,"abstract":"In the traditional volume visualization paradigm, the user specifies a transfer function that assigns each scalar value to a color and opacity by defining an opacity and a color map function. The transfer function has two limitations. First, the user must define curves based on histogram and value rather than seeing and working with the volume itself. Second, the transfer function is inflexible in classifying regions of interest, where values at a voxel such as intensity and gradient are used to differentiate material, not talking into account additional properties such as texture and position. We describe an intuitive user interface for specifying the classification functions that consists of the users painting directly on sample slices of the volume. These painted regions are used to automatically define high-dimensional classification functions that can be implemented in hardware for interactive rendering. The classification of the volume is iteratively improved as the user paints samples, allowing intuitive and efficient viewing of materials of interest.","PeriodicalId":372131,"journal":{"name":"IEEE Visualization, 2003. VIS 2003.","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123356230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}