End Users' Perspectives on Volume Rendering in Medical Imaging: A job well done or not over yet?
M. Meissner, K. Zuiderveld, G. Harris, J. Lesser, M. Vannier
In Proceedings of IEEE Visualization 2005, p. 119. doi:10.1109/VIS.2005.27

The objective of this panel is to reflect on the advances that volume rendering has brought to the medical community and, even more importantly, to discuss its current shortcomings and future needs. This direct feedback from the medical community will hopefully inspire the IEEE Visualization audience and help focus attention on new research areas that will further advance the state of the art in medical visualization. The panel assembles end users from well-known medical facilities and research institutions who have a background in visualization but are primarily experts in using this technology for practical diagnostic applications.
Using Visual Design Expertise to Characterize the Effectiveness of 2D Scientific Visualization Methods
D. Feliz, D. Laidlaw, Fritz Drury
In Proceedings of IEEE Visualization 2005, p. 101. doi:10.1109/VIS.2005.109

Figure 1: Eleven different visualization methods that represent the same continuous scalar dataset. We are characterizing the effectiveness of each of these methods, both individually and in combination, for representing scalar datasets in 2D.

We present the results of a pilot study that evaluates the effectiveness of 2D visualization methods in terms of a set of design factors, which are subjectively rated by expert visual designers. In collaboration with educators from the Illustration Department at the Rhode Island School of Design (RISD), we have defined a space of visualization methods using basic visual elements including icon hue, icon size, icon density, and background saturation (see Figure 1). In this initial pilot study we presented our subjects with single-variable visualization methods. The results characterize the effectiveness of individual visual elements according to our design factors. We are beginning to test these results by creating two-variable visualizations and studying how the different visual elements interact.

1 INTRODUCTION
Given the increasing capacity of scientists to acquire or calculate multivalued datasets, creating effective visualizations for understanding and correlating these data is imperative. However, modeling the space of possible visualization methods for a given scientific problem has challenged computer scientists, statisticians, and cognitive scientists for many years [1,2,3,4]; it is still an open challenge. Our goal is to provide scientists with visualization methods that convey information by optimizing the design of the images to facilitate perception and comprehension. We created a framework for evaluating these visualization methods through feedback from expert visual designers and art educators. Our framework mimics the art education process, in which art educators impart artistic and visual design knowledge to their students through critiques of the students' work. We established a set of factors that characterize the effectiveness of a visualization method in displaying scientific data. These factors include constraints implied by the dataset, such as the relative importance of the different data variables or the minimum feature size present in the data. We also include design, artistic, and perceptual factors, such as the time required to understand the visualization or how visually linear the mapping between data and visual element is across the image. We describe these in detail in Section 2. Evaluating the effectiveness of visualizations is difficult because tests to evaluate them meaningfully are hard to design and execute [5]. We have researched this issue previously in two user studies comparing 2D vector visualization methods. The first …
Interactive Poster: Using CoMIRVA for Visualizing Similarities Between Music Artists
M. Schedl, Peter Knees, G. Widmer
In Proceedings of IEEE Visualization 2005, p. 89. doi:10.1109/VIS.2005.60

This paper presents our framework for music information retrieval and visualization (CoMIRVA). We focus on its functions for visualizing similarities between music artists or songs and describe some approaches we have already implemented. In particular, we present a novel three-dimensional visualization technique based on a geographic model, the very simple "Circled Bars" visualization, which could be used, for example, on mobile devices, and a graph-based visualization approach for prototypical artists.
Particle Generation from User-specified Transfer Function for Point-based Volume Rendering
Naohisa Sakamoto, K. Koyamada
In Proceedings of IEEE Visualization 2005, p. 108. doi:10.1109/VIS.2005.76

In this paper, we propose a technique for generating particles from a user-specified transfer function for effective point-based volume rendering. In general, a volume rendering technique uses an illumination model in which the 3D scalar field is characterized as a varying-density emitter with a single level of scattering. This model is related to a particle system in which the particles are sufficiently small and of low albedo. A conventional volume rendering technique models the density of particles, not the particles themselves [1]. The density is defined by specifying a transfer function from a scalar value to an opacity value. The scalar field is thus described as a continuous semitransparent gel, and the accumulation order matters, which results in considerable computational overhead. Our rendering technique, on the other hand, represents the 3D scalar field as a set of particles. The particle density is derived from a user-specified transfer function and describes the probability that a particle is present at a given point. Since the particles can be considered fully opaque, only a depth comparison, rather than alpha blending, is required during rendering, which is advantageous for distributed processing.
Exploring Defects in Nematic Liquid Crystals
Ketan Mehta, Matthew Lee, T. Jankun-Kelly
In Proceedings of IEEE Visualization 2005, p. 91. doi:10.1109/VIS.2005.34

Visualization of temporal and spatial tensor data is a challenging task due to the large amount of multi-dimensional data. In most visualizations, scientists are interested in finding certain defects, anomalies, or correlations while exploring the data; hence, visualization requires efficient exploration and representation techniques. In order to use nematic liquid crystals (NLC) as biosensors, scientists need to study and explore simulations to understand the relationship between topological defects and the biological specimen. To address this problem, we merge scientific and information visualization techniques to create a controlled exploration environment. The system enables a user to filter and explore NLC data sets for orientation defects. We introduce a three-level visualization approach for exploring tensor data sets using timeline, parallel-coordinate, and glyph-based views. Each level reduces the amount of data carried to the next stage and focuses attention on the relevant subsets. This abstract discusses the goals, approach, and various research issues encountered in the design of the NLC data visualization system.
Visualizing Large Scale Laser-Plasma Interaction 3D Simulations Using Parallel VTK and Extensions on Clusters
D. Aguilera, T. Carrard, G. C. D. Verdière, J. Nominé
In Proceedings of IEEE Visualization 2005, p. 111. doi:10.1109/VIS.2005.127
A System for Interactive Volume Visualization on the PowerWall
P. Woodward, D. Porter, Michael R. Knox, S. Andringa, Alex J. Larson, Aaron Stender
In Proceedings of IEEE Visualization 2005, p. 110. doi:10.1109/VIS.2005.7
Scientific Visualization of Time-Varying Oceanographic and Meteorological Data Using VR
Chang S. Kim, Jinah Kim, H. Lim, K. Parks, Jinah Park
In Proceedings of IEEE Visualization 2005, p. 97. doi:10.1109/VIS.2005.89
Evaluation of Areal Touch Feedback for Palpation Simulation
Jinah Park, Sang-Youn Kim, Ki-Uk Kyung, D. Kwon
In Proceedings of IEEE Visualization 2005, p. 100. doi:10.1109/VIS.2005.28

We experimentally compared the effectiveness of areal contact with that of point contact. We created virtual 3D cubic volumetric objects consisting of approximately 500,000 nodes. Each object is placed on a plane so that the nodes at the bottom of the object are constrained. A user can interact with the object by pushing and pulling on its top surface, as shown in Figure 2. Figures 2(a) and 2(b) show the configurations for pulling up and pushing down, respectively, in the middle of the top surface with the haptic interface. The square-shaped areal contact was made with the tactile display unit attached to the gimbal of a Phantom haptic device. For point-based contact, only the Phantom haptic device was used for interaction with the virtual object. We constructed two volumetric soft objects in which four hard blocks, representing tumors, are placed inside each soft volume, as illustrated in Figure 3. Figure 4 shows the top view of the test objects. Test object (a) is used for the experiment with point haptic feedback only, while test object (b) is used for the experiment with area-based haptic feedback through the augmented tactile display. We asked 20 human subjects to explore the objects with the haptic interface device, one object without the tactile display unit and the other with it. Their task was to locate the hard portions (i.e., tumors) inside the volume, and they were asked to draw the tumors they found on a piece of paper. Each subject drew the tumor locations and sizes as perceived solely through touch feedback; no subject had prior knowledge of the number of tumors to be found. Figure 5 shows representative drawings by the subjects. We can observe that the area-based haptic interface gave superior results compared to the point-based interface. With the point-based interface, most subjects missed tumor #4, which is relatively small; with the area-based haptic interface, all tumors were detected. In palpation, it is important to find not only the number of tumors but also their precise locations and sizes. We therefore defined accuracy measures for the center location and the size of each tumor. Tables 2 and 3 show the average errors computed against these accuracy measures. Although there appears to be an intrinsic source of error due to human perception, our results clearly demonstrate that areal haptic feedback provides a better visualization of the object.
Integrated 3D Visualization of fMRI and DTI Tractography
J. Hardenbergh, B. Buchbinder, S. Thurston, Jonathan W. Lombardi, G. Harris
In Proceedings of IEEE Visualization 2005, p. 94. doi:10.1109/VIS.2005.58

Figure 1. This overview image displays the 3D streamlines representing the DTI tracts, several functional areas, the vasculature, and the underlying anatomy clipped at an oblique angle. The focus of this poster is the addition of the tract geometry to an existing fMRI visualization application. On the right, the corticospinal tract (cyan) can be seen projecting from the left-hand motor region (green).