Parallel MDOM for light transport in participating media
Ajit Hakke Patil, D. Bernabei, Chaly Collins, Ke Chen, S. Pattanaik, F. Ganovelli
Spring Conference on Computer Graphics, May 2013. DOI: 10.1145/2508244.2508261
Abstract: We present a novel technique for physically based rendering of participating media such as clouds, smoke, wax, and marble. We solve the radiative transfer equation (RTE) for participating media using the Modified Discrete Ordinate Method (MDOM), which computes the final solution as a combination of a direct and an indirect component. We propose a scalable GPU-based parallel pipeline for solving the RTE using the MDOM. This parallel RTE solver can render intermediate results such as the single-scattering approximation. We overcome GPU memory limitations by using low-resolution radiance storage while performing high-resolution radiance propagation. Furthermore, we achieve scalability through an efficient volumetric data streaming mechanism. Our results demonstrate the ability of our method to render high-quality multiple scattering effects.
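The direct component of an RTE solution attenuates source radiance along a ray according to the Beer-Lambert law. As a minimal sketch of that idea only (the paper's actual MDOM pipeline is far more involved, and these function names are mine, not the authors'):

```python
import math

def transmittance(sigma_t, ds):
    """Beer-Lambert transmittance along a ray sampled at voxels with
    extinction coefficients sigma_t (per unit length), step size ds."""
    optical_depth = sum(s * ds for s in sigma_t)
    return math.exp(-optical_depth)

def direct_radiance(L0, sigma_t, ds):
    """Source radiance L0 attenuated along the ray (direct component)."""
    return L0 * transmittance(sigma_t, ds)
```

In a vacuum (zero extinction) the radiance passes unattenuated; a single unit-length step of unit extinction scales it by e^-1.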
Contextual Snapshots: Enriched Visualization with Interactive Spatial Annotations
P. Mindek, S. Bruckner, E. Gröller
Spring Conference on Computer Graphics, May 2013. DOI: 10.1145/2508244.2508251
Abstract: Spatial selections are a ubiquitous concept in visualization. By localizing particular features, they can be analyzed and compared in different views. However, the semantics of such selections often depend on other parameter settings, and it can be difficult to reconstruct them without additional information. In this paper, we present the concept of contextual snapshots as an effective means of managing spatial selections in visualized data. The selections are automatically associated with the context in which they were created. Contextual snapshots can also be used as the basis for interactive integrated and linked views, which enable in-place investigation and comparison of multiple visual representations of the data. Our approach is implemented as a flexible toolkit with well-defined interfaces for integration into existing systems. We demonstrate the power and generality of our techniques by applying them to several distinct scenarios, such as the visualization of simulation data and the analysis of historical documents.
Seamless Visual Abstraction of Molecular Surfaces
J. Parulek, T. Ropinski, I. Viola
Spring Conference on Computer Graphics, May 2013. DOI: 10.1145/2508244.2508258
Abstract: Molecular visualization is often challenged by the need to render long sequences of molecular simulations in real time. We introduce a novel approach that enables us to show even large protein complexes over time in real time. Our method is based on the level-of-detail concept: we exploit three different molecular surface models, the solvent-excluded surface (SES), Gaussian kernels, and van der Waals spheres, combined in one visualization. We introduce three shading levels that correspond to their geometric counterparts and a method for creating seamless transitions between these representations. The SES representation, with full shading and added contours, stands in focus, while a sphere representation with constant shading and no contours provides the context. Moreover, we introduce a methodology to render the entire molecule directly using the A-buffer technique, which further improves performance. The rendering performance is evaluated on a series of molecules of varying atom counts.
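Of the three surface models above, the Gaussian-kernel one is the simplest to state: the surface is an isocontour of a density field obtained by summing a Gaussian per atom. A minimal sketch of that field (the isovalue, kernel width, and function names here are illustrative choices, not the paper's):

```python
import math

def gaussian_density(p, atoms, sigma=1.0):
    """Sum of Gaussian kernels centred at atom positions; the
    molecular surface is an isocontour of this scalar field."""
    d = 0.0
    for (ax, ay, az) in atoms:
        r2 = (p[0] - ax) ** 2 + (p[1] - ay) ** 2 + (p[2] - az) ** 2
        d += math.exp(-r2 / (2.0 * sigma * sigma))
    return d

def inside_surface(p, atoms, iso=0.5, sigma=1.0):
    """A point lies inside the surface where the density exceeds iso."""
    return gaussian_density(p, atoms, sigma) >= iso
```

A point at an atom centre has density at least 1 and is inside; a point far from all atoms is outside.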
From Visualization to Association Rules: an automatic approach
Gwenael Bothorel, M. Serrurier, C. Hurter
Spring Conference on Computer Graphics, May 2013. DOI: 10.1145/2508244.2508252
Abstract: The main goal of data mining is the discovery of relevant information in a huge volume of data. It is generally achieved either by automatic algorithms or by visual exploration of the data. With algorithms, an exhaustive set of patterns matching specific measures can be found, but the volume of extracted information can be greater than the volume of the initial data. Visual data mining allows the specialist to focus on a specific area of the data that may describe interesting patterns, but it is often limited by the difficulty of dealing with large amounts of multidimensional data. In this paper, we propose to combine an automatic and a manual method by driving the automatic extraction with a scatter-plot visualization of the data. This visualization affects the number of rules found and their construction. We illustrate our method on two databases: the first describes one month of French air traffic, and the second stems from the 2012 KDD Cup database.
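The "specific measures" that association-rule miners match against are typically support and confidence. A minimal sketch of the two measures over transactions represented as sets (the function names are mine; the paper does not prescribe this formulation):

```python
def support(transactions, itemset):
    """Fraction of transactions that contain every item in itemset."""
    hits = sum(1 for t in transactions if itemset <= t)
    return hits / len(transactions)

def confidence(transactions, antecedent, consequent):
    """Conditional frequency of the rule antecedent -> consequent:
    support of the union divided by support of the antecedent."""
    return support(transactions, antecedent | consequent) / support(transactions, antecedent)
```

For the transactions {a,b}, {a}, {b}, the rule a -> b has support 1/3 and confidence 0.5, since only half of the transactions containing a also contain b.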
Doppler-based 3D Blood Flow Imaging and Visualization
Åsmund Birkeland, D. M. Ulvang, K. Nylund, T. Hausken, O. Gilja, I. Viola
Spring Conference on Computer Graphics, May 2013. DOI: 10.1145/2508244.2508259
Abstract: Blood flow is a very important part of human physiology. In this paper, we present a new method for estimating and visualizing 3D blood flow on the fly based on Doppler ultrasound. We add semantic information about the geometry of the blood vessels in order to recreate the actual velocities of the blood. Assuming laminar flow, the flow direction is related to the general direction of the vessel. Based on the center line of the vessel, we create a vector field representing the direction of the vessel at any given point. The actual flow velocity is then estimated from the Doppler ultrasound signal by back-projecting the velocity in the measured direction onto the vessel direction. Additionally, we estimate the flux at user-selected cross-sections of the vessel by integrating the velocities over the area of the cross-section. In order to visualize the flow and the flux, we propose a visualization design based on traced particles colored by the flux. The velocities are visualized by animating particles in the flow field. Further, we propose a novel particle velocity legend as a means for the user to estimate the numerical value of the current velocity. Finally, we perform an evaluation of the technique in which the accuracy of the velocity estimation is measured against ground truth derived from a 4D MRI dataset.
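Doppler ultrasound only measures the velocity component along the beam. Under the laminar-flow assumption above, the full speed can be recovered by dividing out the cosine between beam and vessel direction, and the flux follows by summing per-cell speeds over a cross-section. A minimal sketch under those assumptions (function names and the error handling are mine, not the paper's):

```python
def backproject_speed(v_meas, beam_dir, vessel_dir):
    """Recover the flow speed along the vessel from the Doppler-measured
    component along the beam, assuming laminar flow parallel to the
    vessel centre line. Both directions are unit 3-vectors."""
    cos_angle = sum(b * v for b, v in zip(beam_dir, vessel_dir))
    if abs(cos_angle) < 1e-6:
        # beam nearly perpendicular to the vessel: no along-vessel signal
        raise ValueError("beam perpendicular to vessel; speed unrecoverable")
    return v_meas / cos_angle

def flux(speeds, cell_area):
    """Approximate the flux through a cross-section by integrating
    per-cell speeds over equal-area cells."""
    return sum(speeds) * cell_area
```

With the beam along (0, 0, 1) and a vessel direction of (0.6, 0, 0.8), a measured component of 0.8 back-projects to a true speed of 1.0.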
Visdom Mobile: Decision Support On-site Using Visual Simulation Control
Bernhard Sadransky, Hrvoje Ribicic, Robert Carnecky, J. Waser
Spring Conference on Computer Graphics, May 2013. DOI: 10.1145/2508244.2508257
Abstract: The power of fast flooding simulations has not yet been harnessed to enhance on-site decision making. In this paper, we present a case study of a mobile user interface that provides remote access to a simulation-powered decision support system. The proposed interface allows users to explore multiple alternative flooding scenarios directly on-site. Different views are used for this purpose: scenario and time navigation is done through a temporal view, while a spatial view is used to navigate through space via a 3D rendering of a scenario. Using the touch-sensitive mobile device, the user can create alternative scenarios by sketching changes directly onto the rendering. The mobile interface acts as a thin client in a distributed environment in which the server performs simulation and rendering. The approach was presented to a group of domain experts in the field of hydrology, who consider it a useful step forward. Further tests were done with a simulation engineer, who considers the interface intuitive and useful.
Improved Computation of Attenuated Light with Application in Scenes with Many Light Sources
T. Milet, J. Navrátil, A. Herout, P. Zemčík
Spring Conference on Computer Graphics, May 2013. DOI: 10.1145/2508244.2508262
Abstract: This paper presents and investigates methods for fast and accurate illumination of scenes containing many light sources with limited spatial influence, e.g. point light sources. To speed up the computation, current graphics applications assume that, owing to this limited influence, the range of each light source can be bounded by a sphere, and illumination is computed only for surfaces lying within the sphere. We therefore explore the differences in illumination between scenes lit by spatially limited light sources and the physically more correct computation in which the light radius is infinite. We show that the difference can be small if appropriate ambient lighting is added. The contribution of the paper is a method for fast estimation of ambient lighting in scenes illuminated by numerous light sources. We also propose a method for eliminating color discontinuities at the edges of the bounding spheres. Our solution is tested on two different scenes: a procedurally generated city and the Sibenik cathedral. Our approach allows correct lighting computation in scenes with numerous light sources without any modification of the scene graph, other data structures, or rendering procedures, and can thus be applied in various systems without structural modifications.
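The color discontinuity at a bounding-sphere edge arises because a 1/d^2 falloff is still nonzero when the radius cuts it off. One common remedy, shown here as a sketch only (the windowing function is a standard choice, not necessarily the method the paper proposes), multiplies the falloff by a smooth window that reaches exactly zero at the radius:

```python
def attenuated_intensity(intensity, d, radius):
    """Inverse-square falloff of a point light, multiplied by a smooth
    window so the contribution fades to exactly zero at the bounding
    sphere radius, removing the hard edge. Assumes d > 0."""
    if d >= radius:
        return 0.0
    falloff = intensity / (d * d)
    window = (1.0 - (d / radius) ** 2) ** 2  # smooth fade, zero at radius
    return falloff * window
```

At the sphere boundary the contribution is exactly zero, so no discontinuity appears; well inside the sphere the windowed value stays close to the physical falloff.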
Visualizing dynamics of object oriented programs with time context
Filip Grznár, Peter Kapec
Spring Conference on Computer Graphics, May 2013. DOI: 10.1145/2508244.2508253
Abstract: Software visualization has been a focus of researchers for several decades. Although many interesting software visualization systems have been developed, very few have managed to become part of the common development process. In this paper we focus on program runtime visualization and present our visualization system, which interactively presents 3D visualizations visually similar to UML sequence diagrams; we show several example visualizations.
Design and Detection of Local Geometric Features for Deformable Marker Fields
Z. Horváth, A. Herout, Istvan Szentandrasi, Michal Zacharias
Spring Conference on Computer Graphics, May 2013. DOI: 10.1145/2508244.2508254
Abstract: A major limitation of contemporary fiducial markers is that they are either very small (they try to represent a single point in space) or must be planar in order to be reasonably detectable. The objective of this work is a deformable large-scale marker or marker field that remains efficiently detectable. We propose a design for such a marker field, the Honeycomb Marker Field. It is composed of symmetric hexagons whose triplets of modules meet at "Y-junctions". We present an efficient detector of these image features, the Y-junctions. Thanks to the specific appearance of these synthetic image features, the algorithm can be very efficient: it visits only a small fraction of the image pixels in order to detect the Y-junctions reliably. The experiments show that, compared to a general feature-point detector (FAST was tested), the specialized Y-junction detector offers better detection reliability.
Dynamic Texture Enlargement
M. Haindl, Radek Richtr
Spring Conference on Computer Graphics, May 2013. DOI: 10.1145/2508244.2508245
Abstract: We present a simple, fast approach to dynamic texture synthesis that realistically matches a given color or multispectral texture appearance and respects its original optical flow. The method generalizes the prominent static double toroid-shaped texture modeling method to the dynamic texture synthesis domain. The analytical part of the method is based on optimal overlapping tiling and a subsequent minimum boundary cut. Optimal toroid-shaped dynamic texture patches are created in each spatial and temporal dimension, respectively. The time-dimension tile border is derived from the optical flow of the modeled texture. The toroid-shaped tiles are created in the analytical step, which is completely separated from the synthesis part; the presented method is therefore extremely fast and capable of enlarging a learned natural dynamic texture spatially and temporally in real time.
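A minimum boundary cut through a tile-overlap region is classically found with dynamic programming over the per-pixel overlap error, each row's cut position moving at most one column from the row above. A minimal sketch of that standard formulation (the paper applies the idea per dimension; this generic helper and its name are mine):

```python
def min_boundary_cut(err):
    """Minimum-cost top-to-bottom cut through an overlap-error matrix
    (list of rows), used to stitch overlapping texture tiles along the
    least-visible seam. Returns one column index per row."""
    rows, cols = len(err), len(err[0])
    cost = [err[0][:]]  # accumulated cost, row by row
    for i in range(1, rows):
        prev = cost[-1]
        row = []
        for j in range(cols):
            # the cut may move at most one column between rows
            row.append(err[i][j] + min(prev[max(0, j - 1):min(cols, j + 2)]))
        cost.append(row)
    # backtrack from the cheapest cell in the last row
    path = [min(range(cols), key=lambda j: cost[-1][j])]
    for i in range(rows - 2, -1, -1):
        j = path[-1]
        candidates = range(max(0, j - 1), min(cols, j + 2))
        path.append(min(candidates, key=lambda k: cost[i][k]))
    path.reverse()
    return path
```

On an error matrix with a zero-cost middle column the cut runs straight down it; with zeros on the diagonal, the cut follows the diagonal.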