Pub Date: 2003-10-20 | DOI: 10.1109/PVGS.2003.1249036
Huijuan Zhang, Timothy S. Newman
A new approach for large dataset isosurface extraction is presented. The approach's aim is efficient parallel isosurfacing when the dataset cannot be processed entirely in-core. The approach focuses on reducing the memory requirement and optimizing disk I/O while achieving a balanced load. In particular, an accurate model of isosurface extraction time is exploited to evenly distribute work across processors. The approach achieves processing efficiency by also avoiding unnecessary processing for portions of the dataset that are not intersected by the isosurface. To reduce the redundant computations and the storage requirements, a flexible, variably-granular data structure is utilized, thereby achieving excellent time and space performance.
Published as "Efficient parallel out-of-core isosurface extraction" in IEEE Symposium on Parallel and Large-Data Visualization and Graphics (PVG 2003).
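The cost-model-driven load balancing described in the abstract above can be sketched in a few lines: given a predicted extraction time per data block, hand the next-costliest block to the least-loaded processor (classic longest-processing-time scheduling). The block costs, function names, and two-processor setup below are illustrative stand-ins, not the authors' actual cost model.

```python
# Hypothetical sketch: distribute blocks across processors by predicted
# isosurface-extraction time so total predicted work is nearly even.
import heapq

def balance_blocks(block_costs, n_procs):
    """Greedy LPT assignment: return per-processor lists of block ids."""
    heap = [(0.0, p) for p in range(n_procs)]      # (current load, processor)
    heapq.heapify(heap)
    assignment = [[] for _ in range(n_procs)]
    # Largest predicted cost first gives the classic LPT approximation.
    for block, cost in sorted(block_costs.items(), key=lambda kv: -kv[1]):
        load, p = heapq.heappop(heap)
        assignment[p].append(block)
        heapq.heappush(heap, (load + cost, p))
    return assignment

costs = {"b0": 9.0, "b1": 7.0, "b2": 6.0, "b3": 5.0, "b4": 4.0, "b5": 3.0}
plan = balance_blocks(costs, 2)
loads = [sum(costs[b] for b in blocks) for blocks in plan]
```

With these invented costs both processors end up with a predicted load of 17.0, which is the kind of evenness the paper's (more accurate) extraction-time model aims for.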
We introduce a multilayered image cache system designed to work with a pool of rendering engines to facilitate a frameless, asynchronous rendering environment for scientific visualization. Our system decouples rendering from the display of imagery at many levels: it decouples render frequency and resolution from display frequency and resolution, allows asynchronous transmission of imagery instead of the compute-send cycle of standard parallel systems, and allows local, incremental refinement of imagery without requiring all imagery to be rerendered. Interactivity is accomplished by maintaining a set of image tiles for display while the production of imagery is performed by a pool of processors. The image tiles are placed at fixed positions in camera (vs. world) space to eliminate occlusion artifacts. Display quality is improved by increasing the number of image tiles, and imagery is refreshed more frequently by decreasing the number of image tiles.
Published as "A multilayered image cache for scientific visualization" by E. LaMar and Valerio Pascucci in IEEE Symposium on Parallel and Large-Data Visualization and Graphics (PVG 2003). Pub Date: 2003-10-20 | DOI: 10.1117/12.539259
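The tile mechanism described above - display tiles fixed in camera space, refreshed asynchronously by a renderer pool, with local invalidation for incremental refinement - can be caricatured as a small cache object. The class and method names are illustrative, not from the paper.

```python
# Toy sketch of the multilayered image cache: the display composites whatever
# tiles are current, while a pool of renderers refills stale tiles.
class TileCache:
    def __init__(self, cols, rows):
        # Tiles live at fixed camera-space grid positions.
        self.grid = {(c, r): None for c in range(cols) for r in range(rows)}

    def stale_tiles(self):
        # Tiles never rendered (or invalidated) are handed to the render pool.
        return [k for k, img in self.grid.items() if img is None]

    def deposit(self, key, image):
        # A renderer finished a tile; the display picks it up asynchronously.
        self.grid[key] = image

    def invalidate(self, key):
        # Local, incremental refinement: only this tile is rerendered.
        self.grid[key] = None

cache = TileCache(2, 2)
for k in cache.stale_tiles():
    cache.deposit(k, f"pixels@{k}")
cache.invalidate((0, 1))
```

Increasing `cols × rows` corresponds to the quality/refresh-rate tradeoff in the abstract: more tiles mean finer updates, fewer tiles mean each tile is refreshed more often.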
Pub Date: 2003-10-20 | DOI: 10.1109/PVGS.2003.1249039
K. Moreland, D. Thompson
We describe a new set of parallel rendering components for VTK, the Visualization Toolkit. The parallel rendering units allow for the rendering of vast quantities of geometry with a focus on cluster computers. Furthermore, the geometry may be displayed on tiled displays at full or reduced resolution. We demonstrate an interactive VTK application processing an isosurface consisting of nearly half a billion triangles and displaying it on a power wall with a total resolution of 63 million pixels. We also demonstrate an interactive VTK application displaying the same geometry on a desktop connected to the cluster via a TCP/IP socket over 100BASE-T Ethernet.
Published as "From cluster to wall with VTK" in IEEE Symposium on Parallel and Large-Data Visualization and Graphics (PVG 2003).
Pub Date: 2003-10-20 | DOI: 10.1109/PVGS.2003.1249048
Jing Chen, D. Silver, Lian Jiang
We describe a feature extraction and tracking algorithm for AMR (adaptive mesh refinement) datasets that operates within a distributed computing environment. Because features can span multiple refinement levels and multiple processors, tracking must be performed across time, across levels, and across processors. The resulting visualization is represented as a "feature tree". A feature contains multiple parts corresponding to different levels of refinement. The feature tree allows a viewer to determine whether a feature splits or merges at the next refinement level, and to extract and isolate a multilevel isosurface and watch how that surface changes over both time and space. The algorithm is implemented within a computational steering environment, which enables the visualization routines to operate on the data in situ (while the simulation is ongoing).
Published as "The feature tree: visualizing feature tracking in distributed AMR datasets" in IEEE Symposium on Parallel and Large-Data Visualization and Graphics (PVG 2003).
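A minimal sketch of the "feature tree" idea above: each node is a feature part at one refinement level, and its children record what that part becomes at the next finer level, so a split shows up as multiple children. Field names are illustrative, not the authors' data layout.

```python
# Hypothetical feature-tree node: one feature part per refinement level,
# with children linking it to its parts at the next refinement level.
from dataclasses import dataclass, field

@dataclass
class FeatureNode:
    level: int
    feature_id: int
    children: list = field(default_factory=list)

    def splits(self):
        # More than one child at the next level means the feature splits.
        return len(self.children) > 1

root = FeatureNode(level=0, feature_id=1)
root.children = [FeatureNode(1, 10), FeatureNode(1, 11)]  # splits at level 1
```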
Pub Date: 2003-10-20 | DOI: 10.1109/PVGS.2003.1249038
D. Brodsky, J. Pedersen
As polygonal models rapidly grow to sizes orders of magnitude larger than the memory of commodity workstations, parallel mesh simplification algorithms become a viable approach to simplifying such models. A naive approach that divides the model into a number of equally sized chunks and distributes them to a number of potentially heterogeneous workstations is bound to fail; in severe cases the computation becomes virtually impossible due to significant slowdowns caused by memory thrashing. We present a general parallel framework for simplification of very large meshes. This framework ensures near-optimal utilization of the computational resources in a cluster of workstations by providing an intelligent partitioning of the model. This partitioning ensures high-quality output, low runtime due to intelligent load balancing, and high parallel efficiency by fully utilizing the memory of each machine, thus guaranteeing not to thrash the virtual memory system. To test the usability of our framework we have implemented a parallel version of R-Simp [Brodsky and Watson 2000].
Published as "A parallel framework for simplification of massive meshes" in IEEE Symposium on Parallel and Large-Data Visualization and Graphics (PVG 2003).
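The memory-aware partitioning constraint above - give each possibly heterogeneous workstation only as much of the mesh as fits in its physical memory, so no node thrashes - can be sketched as a proportional split with a capacity check. The face count, per-face byte size, and memory figures are invented for illustration.

```python
# Hypothetical sketch: split a mesh across heterogeneous nodes in proportion
# to each node's physical memory, refusing any partition that would thrash.
def partition_faces(n_faces, mem_per_node_gb, bytes_per_face=64):
    total_mem = sum(mem_per_node_gb)
    shares = [round(n_faces * m / total_mem) for m in mem_per_node_gb]
    shares[-1] = n_faces - sum(shares[:-1])        # absorb rounding error
    # Sanity check: each share must fit within its node's physical memory.
    for share, mem in zip(shares, mem_per_node_gb):
        assert share * bytes_per_face <= mem * 2**30, "would thrash"
    return shares

shares = partition_faces(1_000_000, [1.0, 2.0, 1.0])
```

An equal split (the "naive approach" the abstract warns about) would ignore the capacity check entirely; here the 2 GB node simply receives twice the share of each 1 GB node.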
Pub Date: 2003-10-20 | DOI: 10.1109/PVGS.2003.1249046
David E. DeMarle, S. Parker, M. Hartner, C. Gribble, C. Hansen
We have constructed a distributed parallel ray tracing system that interactively produces isosurface renderings from large data sets on a cluster of commodity PCs. The program was derived from the SCI Institute's interactive ray tracer (*-Ray), which utilizes small to large shared memory platforms, such as the SGI Origin series, to interact with very large-scale data sets. Making this approach work efficiently on a cluster requires attention to numerous system-level issues, especially when rendering data sets larger than the address space of each cluster node. The rendering engine is an image parallel ray tracer with a supervisor/workers organization. Each node in the cluster runs a multithreaded application. A minimal abstraction layer on top of TCP links the nodes, and enables asynchronous message handling. For large volumes, render threads obtain data bricks on demand from an object-based software distributed shared memory. Caching improves performance by reducing the amount of data transfers for a reasonable working set size. For large data sets, the cluster-based interactive ray tracer performs comparably with an SGI Origin system. We examine the parameter space of the renderer and provide experimental results for interactive rendering of large (7.5 GB) data sets.
Published as "Distributed interactive ray tracing for large volume visualization" in IEEE Symposium on Parallel and Large-Data Visualization and Graphics (PVG 2003).
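The on-demand brick fetching with caching described above can be illustrated with a least-recently-used cache in front of a simulated remote store. This shows only the caching idea that keeps transfers low for a reasonable working set; it does not model the paper's distributed-shared-memory protocol, and all names are illustrative.

```python
# Hypothetical sketch: render threads consult a local brick cache first;
# a miss "fetches" the brick from its owning node (here, a plain dict),
# and the least-recently-used brick is evicted when the cache is full.
from collections import OrderedDict

class BrickCache:
    def __init__(self, capacity, fetch_remote):
        self.capacity = capacity
        self.fetch_remote = fetch_remote    # stand-in for a network fetch
        self.cache = OrderedDict()
        self.misses = 0

    def get(self, brick_id):
        if brick_id in self.cache:
            self.cache.move_to_end(brick_id)       # mark recently used
        else:
            self.misses += 1
            self.cache[brick_id] = self.fetch_remote(brick_id)
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)     # evict LRU brick
        return self.cache[brick_id]

owned = {i: bytes(8) for i in range(10)}           # "remote" brick store
cache = BrickCache(2, owned.__getitem__)
for b in [0, 1, 0, 2, 0]:                          # skewed access pattern
    cache.get(b)
```

Because brick 0 stays hot, only three of the five accesses miss and go over the (simulated) network, which is the working-set effect the paper relies on.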
Pub Date: 2003-10-20 | DOI: 10.1109/PVGS.2003.1249040
Aleksander Stompel, K. Ma, E. Lum, J. Ahrens, J. Patchett
Parallel volume rendering offers a feasible solution to the large data visualization problem by distributing both the data and rendering calculations among multiple computers connected by a network. In sort-last parallel volume rendering, each processor generates an image of its assigned subvolume, which is blended together with other images to derive the final image. Improving the efficiency of this compositing step, which requires interprocessor communication, is the key to scalable, interactive rendering. The recent trend of using hardware-accelerated volume rendering demands further acceleration of the image compositing step. We present a new optimized parallel image compositing algorithm and its performance on a PC cluster. Our test results show that this new algorithm offers significant savings over previous algorithms in both communication and compositing costs. On a 64-node PC cluster with a 100BaseT network interconnect, we can achieve interactive rendering rates for images at resolutions up to 1024x1024 pixels at several frames per second.
Published as "SLIC: scheduled linear image compositing for parallel volume rendering" in IEEE Symposium on Parallel and Large-Data Visualization and Graphics (PVG 2003).
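The compositing step the paper accelerates is the standard one: partial images from different nodes are alpha-blended in depth order into one final image. The sketch below shows the front-to-back "over" operator on premultiplied RGBA pixels (plain tuples); SLIC's actual contribution - scheduling which node sends which pixels to whom - is not modeled here.

```python
# Front-to-back "over" compositing of premultiplied-RGBA pixels, the
# per-pixel operation underlying sort-last image compositing.
def over(front, back):
    """Composite premultiplied-RGBA `front` over `back`."""
    fr, fg, fb, fa = front
    br, bg, bb, ba = back
    k = 1.0 - fa                      # how much of `back` shows through
    return (fr + k * br, fg + k * bg, fb + k * bb, fa + k * ba)

def composite(layers_front_to_back):
    result = (0.0, 0.0, 0.0, 0.0)     # fully transparent accumulator
    for layer in layers_front_to_back:
        result = over(result, layer)
    return result

# Two subvolume images for one pixel, nearest node first.
px = composite([(0.5, 0.0, 0.0, 0.5), (0.0, 0.25, 0.0, 0.25)])
```

Because "over" is associative but not commutative, any compositing schedule (including SLIC's) must preserve this depth order while parallelizing the work.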
Pub Date: 2003-10-20 | DOI: 10.1109/PVGS.2003.1249042
G. Weber, Martin Öhler, O. Kreylos, J. Shalf, E. W. Bethel, B. Hamann, G. Scheuermann
Adaptive mesh refinement (AMR) is a technique used in numerical simulations to automatically refine (or de-refine) certain regions of the physical domain in a finite difference calculation. AMR data consists of nested hierarchies of data grids. As AMR visualization is still a relatively unexplored topic, our work is motivated by the need to perform efficient visualization of large AMR data sets. We present a software algorithm for parallel direct volume rendering of AMR data using a cell-projection technique on several different parallel platforms. Our algorithm can use one of several different distribution methods, and we present performance results for each of these alternative approaches. By partitioning an AMR data set into blocks of constant resolution and estimating rendering costs of individual blocks using an application-specific benchmark, it is possible to achieve even load balancing.
Published as "Parallel cell projection rendering of adaptive mesh refinement data" in IEEE Symposium on Parallel and Large-Data Visualization and Graphics (PVG 2003).
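The first step the abstract names - partitioning an AMR data set into blocks of constant resolution - can be shown in a 1D analog: a coarse domain is cut around its refined patches so every resulting block has a single resolution. Real AMR hierarchies are 3D and nested; this flattened one-level, one-dimensional version is purely illustrative.

```python
# 1D analog of constant-resolution partitioning: split a level-0 domain
# around finer patches so each block carries exactly one refinement level.
def constant_resolution_blocks(domain, patches):
    """domain: (lo, hi) at level 0; patches: [(lo, hi, level), ...]."""
    lo, hi = domain
    blocks, cursor = [], lo
    for plo, phi, level in sorted(patches):
        if cursor < plo:
            blocks.append((cursor, plo, 0))   # coarse gap before the patch
        blocks.append((plo, phi, level))      # the refined patch itself
        cursor = phi
    if cursor < hi:
        blocks.append((cursor, hi, 0))        # trailing coarse block
    return blocks

blocks = constant_resolution_blocks((0, 16), [(4, 8, 1), (10, 12, 1)])
```

Each resulting block can then be assigned a benchmark-estimated rendering cost and distributed to processors, which is how the paper achieves even load balancing.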
Pub Date: 2003-10-20 | DOI: 10.1109/PVGS.2003.1249049
Dirk Staneker, D. Bartz, M. Meissner
Image-space occlusion culling is a useful approach to reduce the rendering load of large polygonal models. Like most large-model techniques, it trades overhead costs against the rendering costs of the possibly occluded geometry. Modern graphics hardware supports occlusion culling, but the queries carry a significant overhead, which hurts in particular when the occlusion query itself is unsuccessful. We propose the occupancy map - a compact, cache-optimized representation of coverage information - to reduce the number of costly but unsuccessful occlusion queries and to arrange multiple occlusion queries. The information in the occupancy map is used to skip an occlusion query if the respective map area is not yet set - that is, if the respective area has not yet received rendered pixels - since such a query would always return "not occluded". The remaining occlusion information is efficiently determined by asynchronous multiple occlusion queries using hardware-supported query functionality. To avoid redundant results, we arrange these multiple occlusion queries according to the information of several occupancy maps. Our technique is conservative and benefits from a partial depth order of the geometry.
Published as "Improving occlusion query efficiency with occupancy maps" in IEEE Symposium on Parallel and Large-Data Visualization and Graphics (PVG 2003).
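The occupancy-map test above can be sketched as a coarse boolean grid over the screen: if an object's screen-space bounding rectangle touches only cells that have received no rendered pixels yet, the hardware occlusion query is skipped, since it would necessarily report "not occluded". The grid resolution and API shape below are illustrative, not the paper's implementation.

```python
# Hypothetical occupancy map: a coarse screen-space grid of "has this region
# received rendered pixels yet?" flags, consulted before issuing a costly
# hardware occlusion query.
class OccupancyMap:
    def __init__(self, w, h):
        self.occupied = [[False] * w for _ in range(h)]

    def mark(self, x, y):
        # Called (coarsely) as geometry is rendered into this region.
        self.occupied[y][x] = True

    def query_needed(self, x0, y0, x1, y1):
        # Issue the occlusion query only if some covered cell already holds
        # geometry that could possibly occlude the object (conservative).
        return any(self.occupied[y][x]
                   for y in range(y0, y1 + 1)
                   for x in range(x0, x1 + 1))

omap = OccupancyMap(4, 4)
omap.mark(0, 0)          # something was rendered in the top-left region
```

The test is conservative in the safe direction: a marked cell only means a query *might* be worthwhile, never that the object is occluded.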
Pub Date: 2003-10-20 | DOI: 10.1109/PVGS.2003.1249044
A. Binotto, J. Comba, C. Freitas
Recent advances in graphics hardware have allowed real-time volume rendering of structured grids using a 3D texturing approach. The next challenging problem is to extend these algorithms to time-varying volumetric data (4D functions), which consumes more storage and is not directly supported by current graphics hardware. Here we present a new visualization technique that includes (1) a compression scheme that packs sparse 4D functions into 3D textures, and (2) a visualization algorithm that decompresses the stored data from the 3D textures using the programmability of fragment shaders, allowing real-time visualization of such data. We illustrate the system in action with datasets resulting from computational fluid dynamics simulations.
Published as "Real-time volume rendering of time-varying data using a fragment-shader compression approach" in IEEE Symposium on Parallel and Large-Data Visualization and Graphics (PVG 2003).
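The compression idea above - a sparse 4D field packed so that only non-empty samples occupy texture memory, with an index that the fragment shader follows at sampling time - can be sketched with plain Python containers standing in for the GPU textures. The data layout and names are an illustration of the general scheme, not the paper's exact encoding.

```python
# Hypothetical sketch: pack only the non-empty voxels of a time-varying
# sparse volume into one flat "3D texture" array, plus an index from
# (t, x, y, z) to its slot; sampling does the same lookup a shader would.
def pack(frames):
    """frames: list of {(x, y, z): value} sparse volumes, one per timestep."""
    texture, index = [], {}
    for t, frame in enumerate(frames):
        for voxel, value in sorted(frame.items()):
            index[(t, *voxel)] = len(texture)
            texture.append(value)
    return texture, index

def sample(texture, index, t, x, y, z, empty=0.0):
    slot = index.get((t, x, y, z))
    return empty if slot is None else texture[slot]

frames = [{(0, 0, 0): 1.0},
          {(0, 0, 0): 2.0, (1, 0, 0): 3.0}]
texture, index = pack(frames)
```

Storage now grows with the number of non-empty voxels across all timesteps rather than with the full 4D extent, which is what makes time-varying data fit into 3D-texture memory.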