{"title":"Real-time kd-tree based importance sampling of environment maps","authors":"Serkan Ergun, Murat Kurt, A. Öztürk","doi":"10.1145/2448531.2448541","DOIUrl":"https://doi.org/10.1145/2448531.2448541","url":null,"abstract":"We present a new real-time importance sampling algorithm for environment maps. Our method represents environment maps using kd-tree structures and generates samples with a single data lookup, yielding an efficient algorithm for real-time image-based lighting applications. In this paper, we compare our algorithm with the inversion method [Fishman 1996] and show that it is both more compact and faster. Based on a number of rendered images, we demonstrate that within a fixed time frame the proposed algorithm produces images with lower noise than the inversion method. We also demonstrate that our algorithm can successfully represent a wide range of material types.","PeriodicalId":235681,"journal":{"name":"Spring conference on Computer graphics","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127781896","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using multi-agent systems for constraint-based modeling","authors":"F. Bauer, M. Stamminger","doi":"10.1145/2448531.2448543","DOIUrl":"https://doi.org/10.1145/2448531.2448543","url":null,"abstract":"Creating content is a vital task in computer graphics. In this paper we evaluate a constraint-based scene description using a multi-agent system, as known from artificial intelligence. Using agents, we separate the modeling process into small, easy-to-understand tasks. The parameters for each agent can be changed at any time. Re-evaluating the agent system results in a consistently updated scene, a process that allows artists to experiment until they find the desired result while still leveraging the power of constraint-based modelling. Since we only need to evaluate modified agents when updating the scene, we can even use this description to perform modeling tasks on mobile devices.","PeriodicalId":235681,"journal":{"name":"Spring conference on Computer graphics","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125353759","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Design principles for cutaway visualization of geological models","authors":"Endre M. Lidal, H. Hauser, I. Viola","doi":"10.1145/2448531.2448537","DOIUrl":"https://doi.org/10.1145/2448531.2448537","url":null,"abstract":"In this paper, we present design principles for cutaway visualizations that emphasize shape and depth communication of the focus features and their relation to the context. First, to eliminate cutaway flatness, we argue that the cutaway axis should have an angular offset from the view direction. Second, we recommend creating a box-shaped cutaway; such a simple cutaway shape allows for easier context extrapolation in the cutaway volume. Third, to improve the relationship between the focus features and the context, we propose to selectively align the cutaway shape to familiar structures in the context. Fourth, we emphasize that the illumination model should effectively communicate the shape and spatial ordering inside the cutaway, through shadowing as well as contouring and other stylized shading models. Finally, we recommend relaxing the view-dependency constraint of the cutaway to improve depth perception through motion parallax. We identified these design principles while developing interactive cutaway visualizations of 3D geological models, inspired by geological illustrations and discussions with domain illustrators and experts.","PeriodicalId":235681,"journal":{"name":"Spring conference on Computer graphics","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125625459","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Stylized volume visualization of streamed sonar data","authors":"Veronika Soltészová, Ruben Patel, H. Hauser, I. Viola","doi":"10.1145/2448531.2448532","DOIUrl":"https://doi.org/10.1145/2448531.2448532","url":null,"abstract":"Current visualization technology implemented in the software for 2D sonars used in marine research is limited to slicing, whilst volume visualization is only possible as post-processing. We designed and implemented a system which allows for instantaneous volume visualization of streamed scans from 2D sonars without prior resampling to a voxel grid. The volume is formed by a set of the most recent scans, which are stored. We transform each scan to view space using its associated transformations and slice the scans' bounding box by view-aligned planes. Each slicing plane is reconstructed from the underlying scans and directly used for slice-based volume rendering. We integrated a low-frequency illumination model which enhances the depth perception of noisy acoustic measurements. While we visualize the 2D data and time as 3D volumes, the temporal dimension is not intuitively communicated; therefore, we introduce the concept of temporal outlines. Our system is the result of an interdisciplinary collaboration between visualization and marine scientists. The application of our system was evaluated by independent domain experts who were not involved in the design process, in order to determine its real-life applicability.","PeriodicalId":235681,"journal":{"name":"Spring conference on Computer graphics","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134563660","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Streamed watershed transform on GPU for processing of large volume data","authors":"M. Hucko, M. Srámek","doi":"10.1145/2448531.2448549","DOIUrl":"https://doi.org/10.1145/2448531.2448549","url":null,"abstract":"Since its introduction, the watershed transform has become a popular method for volume data segmentation. A range of algorithms for its computation has been developed, including parallel algorithms for different architectures. Recently, algorithms for consumer graphics accelerators have also been developed. None of these, however, is able to process data larger than the available memory, as the whole data set has to be present in the memory of the device. In this paper we present two versions of a streamed multi-pass algorithm for watershed computation on a GPU. As a slice-based streaming approach is used, both variants are capable of processing data exceeding the size of the available graphics accelerator memory.","PeriodicalId":235681,"journal":{"name":"Spring conference on Computer graphics","volume":"373 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125635694","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Registration-based interpolation real-time volume visualization","authors":"L. Laursen, H. Ólafsdóttir, J. A. Bærentzen, M. Hansen, B. Ersbøll","doi":"10.1145/2448531.2448533","DOIUrl":"https://doi.org/10.1145/2448531.2448533","url":null,"abstract":"Rendering tomographic data sets is a computationally expensive task, and often accomplished using hardware acceleration. The data sets are usually anisotropic as a result of the process used to acquire them. A vital part of rendering them is the conversion of the discrete signal back into a continuous one, via interpolation. On graphics hardware, this is often achieved via simple linear interpolation. We present a novel approach for real-time anisotropic volume data interpolation on a graphics processing unit and draw comparisons to standard interpolation alternatives. Our approach uses a pre-computed set of cross-slice correspondences to compensate for missing data. We perform a qualitative analysis using sparse data sets, investigating both visual quality and divergence from the ground truth, testing the limits of the interpolation method. Our method produces high-quality interpolation with a moderate performance impact compared to alternatives. It is ideal for reconstructing sparse data sets, as well as for minimizing quality loss while scaling large amounts of data to fit on most mobile graphics cards.","PeriodicalId":235681,"journal":{"name":"Spring conference on Computer graphics","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133950019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Generic graph grammar: a simple grammar for generic procedural modelling","authors":"A. N. Christiansen, J. A. Bærentzen","doi":"10.1145/2448531.2448542","DOIUrl":"https://doi.org/10.1145/2448531.2448542","url":null,"abstract":"Methods for procedural modelling tend to be designed either for organic objects, which are described well by skeletal structures, or for man-made objects, which are described well by surface primitives. Procedural methods which allow for modelling of both kinds of objects are few and usually of greater complexity. Consequently, there is a need for a simple, general method capable of generating both types of objects. Generic Graph Grammar has been developed to address this need. Its production rules consist of a small set of basic productions which are applied directly onto primitives in a directed cyclic graph. Furthermore, the basic productions are chosen such that Generic Graph Grammar seamlessly combines the capabilities of L-systems to imitate biological growth (to model trees, animals, etc.) and those of split grammars to design structured objects (chairs, houses, etc.). This results in a highly expressive grammar capable of generating a wide range of models, consisting of skeletal structures, surfaces, or any combination of these. Besides generic modelling capabilities, the focus has also been on usability, especially user-friendliness and efficiency; therefore, several steps have been taken to simplify the workflow and to make the modelling scheme interactive. As proof of concept, a generic procedural modelling tool based on Generic Graph Grammar has been developed.","PeriodicalId":235681,"journal":{"name":"Spring conference on Computer graphics","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131064412","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effective texture models for three dimensional flow visualization","authors":"O. Mishchenko, R. Crawfis","doi":"10.1145/2448531.2448536","DOIUrl":"https://doi.org/10.1145/2448531.2448536","url":null,"abstract":"Visualizing three-dimensional flow with geometry primitives is challenging due to inevitable clutter and occlusion. Our approach to tackling this problem is to utilize semi-transparent geometry as well as animation. Using semi-transparency, however, can make the visualization blurry and vague. We investigate perceptual limits and derive specific guidelines on using semi-transparency for three-dimensional flow visualization. We base our results on a user study that we conducted, in which users were shown multiple semi-transparent overlapping layers of flow and asked how many different flow directions they were able to discern. We utilized textured lines as geometric primitives; two general texture models were used to control opacity and create animation. We found that the number of high-scoring textures is small compared to the total number of textures within our models. To test our findings, we utilized the high-scoring textures to create visualizations of a variety of datasets.","PeriodicalId":235681,"journal":{"name":"Spring conference on Computer graphics","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114482266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Representing BRDF by wavelet transformation of pair-copula constructions","authors":"A. Bilgili, A. Öztürk, Murat Kurt","doi":"10.1145/2448531.2448539","DOIUrl":"https://doi.org/10.1145/2448531.2448539","url":null,"abstract":"Bidirectional Reflectance Distribution Functions (BRDFs) are well-known functions in computer graphics that represent the surface reflectance of materials. A BRDF can be viewed as a multivariate probability density function (pdf) of incoming photons leaving in a particular outgoing direction. However, constructing a multivariate probability distribution for modeling a given BRDF is difficult. A family of distributions, namely copula distributions, has been used to approximate BRDFs. In this work, we employ pair-copula constructions to represent the measured BRDF densities. As the measured BRDF densities have large storage needs, we use wavelet transforms for a compact BRDF representation. We also compare the proposed BRDF representation with a number of well-known BRDF models, and show that our compact BRDF representation provides a good approximation to measured BRDF data.","PeriodicalId":235681,"journal":{"name":"Spring conference on Computer graphics","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130210715","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Procedural skeletons: kinematic extensions to CGA-shape grammars","authors":"M. Ilcík, S. Fiedler, W. Purgathofer, M. Wimmer","doi":"10.1145/1925059.1925087","DOIUrl":"https://doi.org/10.1145/1925059.1925087","url":null,"abstract":"Procedural modeling for architectural scenes has so far been limited to static objects only. We introduce a novel extension layer for shape grammars which creates a skeletal system for posing and interactive manipulation of generated models. Various models can be derived with the same set of parametrized rules for geometric operations. Separating geometry generation from pose synthesis improves design efficiency and reusability. Moreover, by formal analysis of the production rules we show how to efficiently update the complex kinematic hierarchies created by the skeletons, allowing state-of-the-art interactive visual rule editing.","PeriodicalId":235681,"journal":{"name":"Spring conference on Computer graphics","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129821843","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}