{"title":"Volume Exploration Using Spatially Linked Transfer Functions","authors":"V. Vaidya, R. Mullick, N. Subramanian","doi":"10.1109/VIS.2005.131","DOIUrl":"https://doi.org/10.1109/VIS.2005.131","url":null,"abstract":"","PeriodicalId":91181,"journal":{"name":"Visualization : proceedings of the ... IEEE Conference on Visualization. IEEE Conference on Visualization","volume":"37 1","pages":"96"},"PeriodicalIF":0.0,"publicationDate":"2005-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76196483","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Noise-Adjusted Principle Component Analysis For Hyperspectral Remotely Sensed Imagery Visualization","authors":"Shangshu Cai, Q. Du, R. Moorhead, M. J. Mohammadi-Aragh, D. Irby","doi":"10.1109/VIS.2005.70","DOIUrl":"https://doi.org/10.1109/VIS.2005.70","url":null,"abstract":"Introduction In recent years, hyperspectral imaging has been developed in remote sensing, which uses hundreds of co-registered spectral channels to acquires images for the same area on the earth. Its high spectral resolution enables researchers and scientists to detect features, classify objects, and extract ground information more accurately. PCA [1] is a typical approach for high-dimensional data analysis, which assembles the major data information into the first several principal components (PCs) based on variance maximization. However, variance is not a good criterion to rank the data features because part of the variance may be from noise. The noise should be whitened before PCA, which is equivalently to rank the PCs in terms of signal-to-noise ratio. The resultant technique is called Noise-Adjusted Principal Component Analysis (NAPCA) [2]. In our research, NAPCA is employed to visualize images taken by Hyperion, the first spaceborne hyperspectral sensor onboard NASA’s EO-1 satellite.","PeriodicalId":91181,"journal":{"name":"Visualization : proceedings of the ... IEEE Conference on Visualization. IEEE Conference on Visualization","volume":"31 1","pages":"105"},"PeriodicalIF":0.0,"publicationDate":"2005-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76891302","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Clustering Techniques for Out-of-Core Multi-resolution Modeling","authors":"E. Danovaro, L. Floriani, E. Puppo, H. Samet","doi":"10.1109/VIS.2005.15","DOIUrl":"https://doi.org/10.1109/VIS.2005.15","url":null,"abstract":"Thanks to improvements in simulation tools, high resolution scanning facilities and multidimensional medical imaging, huge datasets are commonly available. Multi-resolution models manage the complexity of such data sets, by varying resolution and focusing detail in specific areas of interests. Since many currently available data sets cannot fit in main memory, the need arises to design data structures, construction and query algorithms for multi-resolution models which work in secondary memory. Several techniques have been proposed in the literature for outof-core simplification of triangle meshes, while much fewer techniques support multi-resolution modeling. Some such techniques only deal with terrain data [2, 8, 10, 11]. Techniques proposed in [3, 6, 7, 9, 14] have been developed for free-form surface modeling and most of them are based on space partitioning. Our goal is to design and develop a general technique for irregularly distribuited data describing two and three-dimension scalar fields and free-form surfaces. In the spirit of our previous work, we define a general out-of-core strategy for a model that is independent of both the dimension and the specific simplification strategy used to generate it, i.e., the Multi-Tessellation (MT) [12, 5]. The MT consists of a coarse mesh plus a collection of refinement modifications organized according to a dependency relation, which guides extracting topologically consistent meshes at variable resolution. We have shown that the other multi-resolution data structures developed in the literature are specific instances of an MT. Thus, data structures optimized on the basis of a specific simplification operator, like edge collapse or vertex removal, could be derived from a general out-of-core MT. 
The basic queries on a multi-resolution model are instances of selective refinement, which consists of extracting adaptive meshes of minimal size according to application-dependent requirements. We have first analyzed the I/O operations performed by selective refinement algorithms and designed and implemented a simulation environment which allows us to evaluate a large number of data structures for encoding a MT out-of-core. We have designed and developed more than sixty clustering techniques for the modifications forming a MT, which take into account their mutual dependency relations and their arrangement in space. Based on the data structure selected through this investigation, we are currently developing an out-of-core prototype system for multi-resolution modeling which is independent of the way single modifications are encoded.","PeriodicalId":91181,"journal":{"name":"Visualization : proceedings of the ... IEEE Conference on Visualization. IEEE Conference on Visualization","volume":"104 1","pages":"113"},"PeriodicalIF":0.0,"publicationDate":"2005-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79533382","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
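The dependency relation that guides selective refinement can be illustrated with a small closure computation: a modification may be applied only after every modification it depends on. The data layout below (a `parents` dict and a `needed` predicate, e.g. "approximation error exceeds a threshold inside the region of interest") is a hypothetical simplification of mine, not the MT encodings the paper actually evaluates.

```python
def selective_refinement(mods, parents, needed):
    """Return the dependency-closed set of modifications to apply.

    mods:    iterable of modification ids.
    parents: dict mapping an id to the set of ids it depends on.
    needed:  application-supplied predicate selecting modifications
             (hypothetical stand-in for an error/region criterion).
    """
    selected = set()
    stack = [m for m in mods if needed(m)]
    while stack:
        m = stack.pop()
        if m in selected:
            continue
        selected.add(m)
        # Pull in everything m transitively depends on.
        stack.extend(parents.get(m, ()))
    return selected
```

An out-of-core encoding then tries to cluster modifications so that each such closure touches as few disk pages as possible, which is what the paper's sixty-plus clustering techniques vary.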
{"title":"Interactive CSG Trees Inside Complex Scenes","authors":"Jan Ohlenburg, Jan Müller","doi":"10.1109/VIS.2005.59","DOIUrl":"https://doi.org/10.1109/VIS.2005.59","url":null,"abstract":"Constructive Solid Geometry (CSG) is widely used in modeling tools such as CAD applications. A number of algorithms exists for different scenarios, e.g. for interactive use of CSG modeling the Z-buffer algorithm is a simple but very efficient approach [1], known as the Goldfeather algorithm. For larger CSG trees and noninteractive modeling B-rep algorithms calculate the resulting polygons for a view-independent representation, but are too slow for real-time calculation. A good overview of the different approaches to CSG is given in [2], which also was the motivation for our work.","PeriodicalId":91181,"journal":{"name":"Visualization : proceedings of the ... IEEE Conference on Visualization. IEEE Conference on Visualization","volume":"14 1","pages":"106"},"PeriodicalIF":0.0,"publicationDate":"2005-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74408377","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visual Exploration of Multidimensional Feature Space of Biological Data","authors":"Tom Arodz, K. Boryczko, W. Dzwinel, Marcin Kurdziel, D. Yuen","doi":"10.1109/VIS.2005.115","DOIUrl":"https://doi.org/10.1109/VIS.2005.115","url":null,"abstract":"Molecular biology is a source of vast quantities of information. Nucleotide sequences, gene expression patterns, protein abundances, sequences and structures, drug activities, gene and metabolic networks are being harvested at laboratories throughout the world. The collected data can be represented by multidimensional feature vectors or by descriptors, which are less formalized, yet still allow one to define similarity relations among objects. Both data representations can be analyzed using data mining and pattern recognition tools. Such tools should allow for interactive, 3-D visual exploration of multidimensional data space by the bio-specialist, rather than for automatic data processing.","PeriodicalId":91181,"journal":{"name":"Visualization : proceedings of the ... IEEE Conference on Visualization. IEEE Conference on Visualization","volume":"26 1","pages":"90"},"PeriodicalIF":0.0,"publicationDate":"2005-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89484454","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enclosure Sphere Based Cell Visibility for Virtual Endoscopy","authors":"Jianfei Liu, Xiaopeng Zhang","doi":"10.1109/vis.2005.26","DOIUrl":"https://doi.org/10.1109/vis.2005.26","url":null,"abstract":"Virtual Endoscopy is an interactive exploration inside human organs to detect polyps by combining medical imaging and computer graphics technologies. In order to realize rendering acceleration, we present a novel two-steps visibility algorithm, Enclosure Sphere Based Cell Visibility (ESBCV), which performs visibility computation between cells assisted by Z-buffer in preprocessing and eye-to-cell visibility through a simple numerical calculation of the intersection of a circle with the rendered image in navigation. Experimental results demonstrated virtual navigation with high image quality and interactive rendering speed. CR","PeriodicalId":91181,"journal":{"name":"Visualization : proceedings of the ... IEEE Conference on Visualization. IEEE Conference on Visualization","volume":"47 1","pages":"98"},"PeriodicalIF":0.0,"publicationDate":"2005-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90591221","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"NASA's Scientific Visualization Studio Image Server","authors":"E. Sokolowsky, H. Mitchell, J. D. L. Beaujardière","doi":"10.1109/VIS.2005.68","DOIUrl":"https://doi.org/10.1109/VIS.2005.68","url":null,"abstract":"","PeriodicalId":91181,"journal":{"name":"Visualization : proceedings of the ... IEEE Conference on Visualization. IEEE Conference on Visualization","volume":"109 1","pages":"103"},"PeriodicalIF":0.0,"publicationDate":"2005-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77227192","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Concurrent Visualization System for High-Performance Computational Simulations","authors":"Joe Groner, Matthew Lee, Joel P. Martin, R. Moorhead, James Newman","doi":"10.1109/VIS.2005.2","DOIUrl":"https://doi.org/10.1109/VIS.2005.2","url":null,"abstract":"","PeriodicalId":91181,"journal":{"name":"Visualization : proceedings of the ... IEEE Conference on Visualization. IEEE Conference on Visualization","volume":"31 1","pages":"115"},"PeriodicalIF":0.0,"publicationDate":"2005-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78405714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"General Purpose Computation on Graphics Hardware","authors":"A. Lefohn, I. Buck, P. McCormick, John Douglas Owens, Timothy J. Purcell, R. Strzodka","doi":"10.1109/VIS.2005.43","DOIUrl":"https://doi.org/10.1109/VIS.2005.43","url":null,"abstract":"","PeriodicalId":91181,"journal":{"name":"Visualization : proceedings of the ... IEEE Conference on Visualization. IEEE Conference on Visualization","volume":"62 1","pages":"121"},"PeriodicalIF":0.0,"publicationDate":"2005-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80708512","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dynamic Volume Rendering for Intensity Modulated Radiation Therapy (IMRT) Treatment","authors":"Rajarathinam Arangarasan, Sungeun Kim, S. Orçun","doi":"10.1109/VIS.2005.24","DOIUrl":"https://doi.org/10.1109/VIS.2005.24","url":null,"abstract":"For some decades, radiation therapy has been proved successful in cancer treatment. The major task of radiation therapy is to impose a maximum dose of radiation to the tumor cells. The IMRT technology makes it possible to deliver radiation more precisely by dividing the accelerator head into smaller units called “beamlets” that can be manipulated independently. This treatment planning requires a time consuming iterative work between a physician and an IMRT technician. The state of the art current technique is to determine the IMRT treatment plan at the beginning and use it without changing it in the course of the treatment. However, the assumption of fixed ‘target volume’ throughout the IMRT treatment, is very limiting given that the tumor shrinks in response to the radiation therapy. In this research, we apply time-varying volume rendering technique to the IMRT treatment and develop a proof-of-concept prototype system that enables capturing and relating the time dependent changes of the irradiated volume throughout the course of treatment. This prototype system will enable the researchers to explore different what-if scenarios, such as determining the ‘Target Treatment Volume’ depending on the delivered dose of radiation.","PeriodicalId":91181,"journal":{"name":"Visualization : proceedings of the ... IEEE Conference on Visualization. 
IEEE Conference on Visualization","volume":"11 1","pages":"95"},"PeriodicalIF":0.0,"publicationDate":"2005-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91189548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}