{"title":"A Virtual Reality Toolkit for the Diagnosis and Monitoring of Myocardial Infarctions","authors":"J. Ryan, C. O'Sullivan, C. Bell, N. Mulvihill","doi":"10.2312/VG/VG05/055-062","DOIUrl":"https://doi.org/10.2312/VG/VG05/055-062","url":null,"abstract":"We have developed a software system that takes standard electrocardiogram (ECG) input and interprets this input along with user-defined and automatically defined markers to diagnose myocardial infarctions (MI). These pathologies are then automatically represented within a volumetric model of the heart. Over a period of six months 30 patients were monitored using a digital ECG system and this information was used to test and develop our system. It was found that the STEMIs (ST segment Elevation MI) were successfully diagnosed, however NSTEMIs (Non-STEMI), although correctly interpreted, were more ambiguous due to the fact that T wave inversions are sometimes seen on normal ECGs. Control ECGs of normal hearts were also taken. The system correctly interpreted this data as being normal. A standard voxel-count metric was developed so that future work in MI monitoring will be possible. The toolkit was found to be beneficial for three possible uses, as a diagnostic tool for clinicians, as a teaching tool for students and also as an information tool for the patient.","PeriodicalId":289994,"journal":{"name":"IEEE VGTC / Eurographics International Symposium on Volume Graphics","volume":"103 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124588513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robust Generation of Signed Distance Fields from Triangle Meshes","authors":"J. A. Bærentzen","doi":"10.2312/VG/VG05/167-175","DOIUrl":"https://doi.org/10.2312/VG/VG05/167-175","url":null,"abstract":"A new method for robust generation of distance fields from triangle meshes is presented. Graphics hardware is used to accelerate a technique for generating layered depth images. From multiple layered depth images, a binary volume and a point representation are extracted. The point information is then used to convert the binary volume into a distance field. The method is robust and handles holes, spurious triangles and ambiguities. Moreover, the method lends itself to Boolean operations between solids. Since a point cloud as well as a signed distance is generated, it is possible to extract an iso-surface of the distance field and fit it to the point set. Using this method, one may recover sharp edge information. Examples are given where the method for generating distance fields coupled with mesh fitting is used to perform Boolean and morphological operations on triangle meshes.","PeriodicalId":289994,"journal":{"name":"IEEE VGTC / Eurographics International Symposium on Volume Graphics","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115503752","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Integrated Pipeline of Decompression, Simplification and Rendering for Irregular Volume Data","authors":"Chuan-Kai Yang, T. Chiueh","doi":"10.2312/VG/VG05/147-155","DOIUrl":"https://doi.org/10.2312/VG/VG05/147-155","url":null,"abstract":"Very large irregular-grid volume data sets are typically represented as tetrahedral mesh and require substantial disk I/O and rendering computation. One effective way to reduce this demanding resource requirement is compression. Previous research showed how rendering and decompression of a losslessly compressed irregular-grid data set can be integrated into a one-pass computation. This work advances the state of the art one step further by showing that a losslessly compressed irregular volume data set can be simplified while it is being decompressed and that simplification, decompression, and rendering can again be integrated into a pipeline that requires only a single pass through the data sets. Since simplification is a form of lossy compression, the on-the-fly volume simplification algorithm provides a powerful mechanism to dynamically create versions of a tetrahedral mesh at multiple resolution levels directly from its losslessly compressed representation, which also corresponds to the finest resolution level. In particular, an irregular-grid volume renderer can exploit this multi-resolution representation to maintain interactivity on a given hardware/software platform by automatically adjusting the amount of rendering computation that could be afforded, or performing so called time-critical rendering. The proposed tetrahedral mesh simplification algorithm and its integration with volume decompression and rendering has been successfully implemented in the Gatun system. 
Performance measurements on the Gatun prototype show that simplification only adds less than 5% of performance overhead on an average and with multi-resolution pre-simplification the end-to-end rendering delay indeed decreases in an approximately linear fashion with respect to the simplification ratio.","PeriodicalId":289994,"journal":{"name":"IEEE VGTC / Eurographics International Symposium on Volume Graphics","volume":"97 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127595449","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Accuracy-based sampling and reconstruction with adaptive grid for parallel hierarchical tetrahedrization","authors":"Hiromi T. Tanaka, Y. Takama, Hiroki Wakabayashi","doi":"10.1145/827051.827063","DOIUrl":"https://doi.org/10.1145/827051.827063","url":null,"abstract":"Recent advances in volume scanning techniques have made the task of acquiring volume data of 3-D objects easier and more accurate. Since the quantity of such acquired data is generally very large, a volume model capable of compressing data while maintaining a specified accuracy is required. The objective of this work is to construct a multi resolution tetrahedra representation of input volume data. This representation adapts to local field properties while preserving their discontinuities. In this paper, we present an accuracy-based adaptive sampling and reconstruction technique, we call an adaptive grid, for hierarchical tetrahedrization of C1 continuous volume data. We have developed a parallel algorithm of adaptive grid generation that recursively bisects tetrahedra gird elements by increasing the number of grid nodes, according to local field properties and such as orientation and curvature of isosurfaces, until the entire volume has been approximated within a specified level of view-invariant accuracy. We have also developed a parallel algorithm that detects and preserves both C0 and C1 discontinuities of field values, without the formation of cracks which normally occur during independent subdivision. 
Experimental results demonstrate the validity and effusiveness of the proposed approach.","PeriodicalId":289994,"journal":{"name":"IEEE VGTC / Eurographics International Symposium on Volume Graphics","volume":"4 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127087983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An interactive volume visualization system for transient flow analysis","authors":"Gabriel G. Rosa, E. Lum, K. Ma, K. Ono","doi":"10.1145/827051.827072","DOIUrl":"https://doi.org/10.1145/827051.827072","url":null,"abstract":"This paper describes the design and performance of an interactive visualization system developed specifically for improved understanding of time-varying volume data from thermal flow simulations for vehicle cabin and ventilation design. The system uses compression to allows for better memory utilization and faster data transfer, hardware accelerated rendering to enable interactive exploration, and an intuitive user interface to support comparative visualization. In particular, the interactive exploration capability offered by the system raises scientists to a new level of insight and comprehension. Compared to a previous visualization solution, such a system helps scientists more quickly identify and correct design problems.","PeriodicalId":289994,"journal":{"name":"IEEE VGTC / Eurographics International Symposium on Volume Graphics","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116097091","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Rapid emission tomography reconstruction","authors":"Ken Chidlow, Torsten Möller","doi":"10.1145/827051.827053","DOIUrl":"https://doi.org/10.1145/827051.827053","url":null,"abstract":"We present new implementations of the Maximum Likelihood Expectation Maximization (EM) algorithm and the related Ordered Subset EM (OSEM) algorithm. Our implementation is based on modern graphics hardware and achieves speedups of over eight times current software implementation, while reducing the RAM required to practical amounts for today's PC's. This is significant as it will make this algorithm practical for clinical use. In order to achieve a large speed up, we present bit splitting over different color channels as an accumulation strategy. We also present a novel hardware implementation for volume rendering emission data without loss of accuracy. Improved results are achieved through incorporation of attenuation correction with only a small speed penalty.","PeriodicalId":289994,"journal":{"name":"IEEE VGTC / Eurographics International Symposium on Volume Graphics","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115392240","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Spatial transfer functions: a unified approach to specifying deformation in volume modeling and animation","authors":"Min Chen, D. Silver, Andrew S. Winter, Vikas Singh, N. Cornea","doi":"10.1145/827051.827056","DOIUrl":"https://doi.org/10.1145/827051.827056","url":null,"abstract":"In this paper, we introduce the concept of spatial transfer functions as a unified approach to volume modeling and animation. A spatial transfer function is a function that defines the geometrical transformation of a scalar field in space, and is a generalization and abstraction of a variety of deformation methods. It facilitates a field based representation, and can thus be embedded into a volumetric scene graph under the algebraic framework of constructive volume geometry. We show that when spatial transfer functions are treated as spatial objects, constructive operations and conventional transfer functions can be applied to such spatial objects. We demonstrate spatial transfer functions in action with the aid of a collection of examples in volume visualization, sweeping, deformation and animation. In association with these examples, we describe methods for modeling and realizing spatial transfer functions, including simple procedural functions, operational decomposition of complex functions, large scale domain decomposition and temporal spatial transfer functions. 
We also discuss the implementation of spatial transfer functions in the vlib API and our efforts in deploying the technique in volume animation.","PeriodicalId":289994,"journal":{"name":"IEEE VGTC / Eurographics International Symposium on Volume Graphics","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129846655","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
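As a concrete illustration of the concept in the record above (independent of the vlib API, with made-up field and parameters), a spatial transfer function can be modeled as an inverse coordinate warp applied before sampling the underlying scalar field:

```python
import math

def sample_deformed(field, stf, p):
    # The spatial transfer function maps a point in the deformed object's
    # space back into the source field's space (an inverse warp).
    return field(stf(p))

# Example scalar field: distance from the origin (level sets are spheres).
def sphere(p):
    return math.sqrt(p[0] ** 2 + p[1] ** 2 + p[2] ** 2)

# Example spatial transfer function: a twist about the z-axis whose
# angle grows with height (hypothetical twist rate).
def twist(p, rate=0.5):
    x, y, z = p
    a = rate * z
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a),
            z)

value = sample_deformed(sphere, twist, (1.0, 0.0, 2.0))
```

Because the twist is a per-slice rotation, it preserves distance from the origin, so this particular sample is unchanged by the warp; a non-isometric warp would visibly deform the level sets.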
{"title":"Out-of-core encoding of large tetrahedral meshes","authors":"S. Ueng","doi":"10.1145/827051.827065","DOIUrl":"https://doi.org/10.1145/827051.827065","url":null,"abstract":"In this paper, an out-of-core data compression method is presented to encode large Finite Element Analysis (FEA) meshes. The method is comprised with two stages. At the first stage, the input FEA mesh is divided into blocks, called octants, based on an octree structure. Each octant must contain less FEA cells than a predefined limit such that it can fit into the main memory. Octants produced in the data division are stored in disk files. At the second stage, the octree is traversed to enumerate all the octants. These octants are fetched into the main memory and compressed there one by one. To compress an octant, the cell connectivities of the octant are computed. The connectivities are represented by using an adjacency graph. In the graph, a graph vertex represents an FEA cell, and if two cells are adjacent by sharing a face then an edge is drawn between the corresponding vertices of the cells. Next the adjacency graph is traversed by using a depth first search, and the mesh is split into tetrahedral strips. In a tetrahedral strip, every two consecutive cells share a face, and only one vertex reference is needed for specifying a cell. Therefore, less memory space is required for storing the mesh. According to the different situations encountered during the depth first search, the tetrahedral strips are encoded by using four types of instructions. When the traversal is completed, the tetrahedral strips are converted into a byte string and written into a disk file. To decode the compressed mesh, the instructions kept in the disk file are fetched into the main memory in blocks. For each block of instructions, the instructions are executed one by one to reconstruct the mesh. 
Test results reveal that the out-of-core compression method can compress large meshes on a desk-top machine with moderate memory space within reasonable time. The out-of-core method also achieves better compression ratios than an incore method which was developed in a previous research.","PeriodicalId":289994,"journal":{"name":"IEEE VGTC / Eurographics International Symposium on Volume Graphics","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131252633","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
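The strip-extraction idea described in the record above — build a face-adjacency graph over the tetrahedra, then peel off strips during a depth-first search — can be sketched as follows. This is a simplified, hypothetical illustration: it emits plain cell-index strips rather than the paper's four instruction types or its byte encoding:

```python
from collections import defaultdict

def tetra_strips(cells):
    """cells: list of tetrahedra, each a 4-tuple of vertex indices.
    Returns strips in which every two consecutive cells share a face."""
    # Two cells are adjacent when they share a face (three vertices).
    face_map = defaultdict(list)
    for ci, cell in enumerate(cells):
        for skip in range(4):
            face = frozenset(v for j, v in enumerate(cell) if j != skip)
            face_map[face].append(ci)
    adj = defaultdict(set)
    for owners in face_map.values():
        for a in owners:
            for b in owners:
                if a != b:
                    adj[a].add(b)
    # Depth-first traversal; start a new strip whenever the next cell
    # does not extend the current one.
    visited, strips = set(), []
    for start in range(len(cells)):
        if start in visited:
            continue
        stack, strip = [start], []
        while stack:
            c = stack.pop()
            if c in visited:
                continue
            visited.add(c)
            if strip and c not in adj[strip[-1]]:
                strips.append(strip)
                strip = []
            strip.append(c)
            stack.extend(n for n in adj[c] if n not in visited)
        strips.append(strip)
    return strips
```

Within a strip, each cell after the first can be encoded with a single new vertex reference, which is the source of the compression.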
{"title":"Chronovolumes: a direct rendering technique for visualizing time-varying data","authors":"J. Woodring, Han-Wei Shen","doi":"10.1145/827051.827054","DOIUrl":"https://doi.org/10.1145/827051.827054","url":null,"abstract":"We present a new method for displaying time varying volumetric data. The core of the algorithm is an integration through time producing a single view volume that captures the essence of multiple time steps in a sequence. The resulting view volume then can be viewed with traditional raycasting techniques. With different time integration functions, we can generate several kinds of resulting chronovolumes, which illustrate differing types of time varying features to the user. By utilizing graphics hardware and texture memory, the integration through time can be sped up, allowing the user interactive control over the temporal transfer function and exploration of the data.","PeriodicalId":289994,"journal":{"name":"IEEE VGTC / Eurographics International Symposium on Volume Graphics","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122132166","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}