Although the main memory capacity of modern computers is constantly growing, developers and users of data manipulation and visualization tools repeatedly run into the problem of memory shortage. In this paper, we advocate slice-based streaming as a possible solution to this problem for the preprocessing and analysis of volumetric data defined over Cartesian, regular and other types of structured grids. In our version of streaming, data flows through independent processing units---filters---represented by individual system processes, each of which stores just a minimal fraction of the whole data set, with a slice as the basic data entity. Such filters can easily be interconnected in complex networks by means of standard interprocess communication using named pipes and are executed concurrently on a parallel system without requiring specific modification or explicit parallelization. In our technique, the amount of data stored by a filter is defined by the algorithm implemented therein and is in most cases as small as one data slice or only a few slices. Thus, the upper bound on the processed data volume is no longer set by the main memory size but by the disc capacity, which is usually orders of magnitude larger. We propose implementations of this technique for various point, local and even global data processing operations, which may require multiple passes over the input data or temporary data buffering. Further, we give a detailed performance analysis and show how well this approach fits the current trend of employing cheap multicore processors and multiprocessor computers.
{"title":"Processing of volumetric data by slice- and process-based streaming","authors":"A. Varchola, A. Vasko, Viliam Solcany, L. Dimitrov, M. Srámek","doi":"10.1145/1294685.1294703","DOIUrl":"https://doi.org/10.1145/1294685.1294703","url":null,"abstract":"Although the main memory capacity of modern computers is constantly growing, the developers and users of data manipulation and visualization tools fight all over again with the problem of its shortage. In this paper, we advocate slice-based streaming as a possible solution for the memory shortage problem in the case of preprocessing and analysis of volumetric data defined over Cartesian, regular and other types of structured grids. In our version of streaming, data flows through independent processing units---filters---represented by individual system processes, which store each just a minimal fraction of the whole data set, with a slice as a basic data entity. Such filters can be easily interconnected in complex networks by means of standard interprocess communication using named pipes and are executed concurrently on a parallel system without a requirement of specific modification or explicit parallelization.\u0000 In our technique, the amount of stored data by a filter is defined by the algorithm implemented therein, and is in most cases as small as one data slice or only several slices. Thus, the upper bound on the processed data volume is not any more defined by the main memory size but is shifted to the disc capacity, which is usually orders of magnitude larger. We propose implementations of this technique for various point, local and even global data processing operations, which may require multiple runs over the input data or eventually temporary data buffering. Further, we give a detailed performance analysis and show how well this approach fits to the current trend of employing cheap multicore processors and multiprocessor computers.","PeriodicalId":325699,"journal":{"name":"International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124849176","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Speech-synchronized facial animation forms an increasingly important aspect of computer animation. The majority of commercial animation products are produced using the English language. Major stakeholders in the industry are the producers of animated movies and the developers of computer games, while the creation of conversational agents for communication in cyberspace and for applications in, for example, language learning is an active field of investigation. It is, therefore, not surprising that most of the commercial facial animation and lip synchronization software caters mainly for English. Northern Sotho, one of the eleven official languages of South Africa, belongs to the so-called Bantu language family and is a resource-scarce (in terms of language resources, technological infrastructure and funding), lesser-studied language of the world. The general question as to whether facial animation tools mainly developed and used for English are appropriate for Northern Sotho speech animation is addressed. More specifically, we investigate what can be achieved with commercially available animation products for English. The paper reports on the process followed, the first results obtained and insights acquired. It is demonstrated that a variety of non-English (Northern Sotho) phonemes can indeed be modelled by tools developed for English by combining multiple different English phonemes and manipulating facial muscles and their actions.
{"title":"Towards a Northern Sotho talking head","authors":"Mauricio Radovan, L. Pretorius, A. E. Kotzé","doi":"10.1145/1294685.1294707","DOIUrl":"https://doi.org/10.1145/1294685.1294707","url":null,"abstract":"Speech-synchronized facial animation forms an increasingly important aspect of computer animation. The majority of commercial animation products are produced using the English language. Major stakeholders in the industry are the producers of animated movies and the developers of computer games, while the creation of conversational agents for communication in cyberspace and for applications in, for example, language learning is an active field of investigation. It is, therefore, not surprising that most of the commercial facial animation and lip synchronization software caters mainly for English. Northern Sotho, one of the eleven official languages of South Africa, belongs to the so-called Bantu language family and is a resource-scarce (in terms of language resources, technological infrastructure and funding), lesser-studied language of the world. The general question as to whether facial animation tools mainly developed and used for English are appropriate for Northern Sotho speech animation is addressed. More specifically, we investigate what can be achieved with commercially available animation products for English. The paper reports on the process followed, the first results obtained and insights acquired. It is demonstrated that a variety of non-English (Northern Sotho) phonemes can indeed be modelled by tools developed for English by combining multiple different English phonemes and manipulating facial muscles and their actions.","PeriodicalId":325699,"journal":{"name":"International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127708897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Existing texture-synthesis-from-example strategies for polygon meshes typically make use of three components: a multi-resolution mesh hierarchy that allows the overall nature of the pattern to be reproduced before filling in detail; a matching strategy that extends the synthesized texture using the best fit from a texture sample; and a transfer mechanism that copies the selected portion of the texture sample to the target surface. We introduce novel alternatives for each of these components. Use of √2-subdivision surfaces provides the mesh hierarchy and allows fine control over the surface complexity. Adaptive subdivision is used to create an even vertex distribution over the surface. Use of the graph defined by a surface region for matching, rather than a regular texture neighbourhood, provides for flexible control over the scale of the texture and allows simultaneous matching against multiple levels of an image pyramid created from the texture sample. We use graph cuts for texture transfer, adapting this scheme to the context of surface synthesis. The resulting surface textures are realistic, tolerant of local mesh detail and are comparable to results produced by texture neighbourhood sampling approaches.
{"title":"Graph matching with subdivision surfaces for texture synthesis on surfaces","authors":"S. Bangay, C. Morkel","doi":"10.1145/1108590.1108601","DOIUrl":"https://doi.org/10.1145/1108590.1108601","url":null,"abstract":"Existing texture synthesis-from example strategies for polygon meshes typically make use of three components: a multi-resolution mesh hierarchy that allows the overall nature of the pattern to be reproduced before filling in detail; a matching strategy that extends the synthesized texture using the best fit from a texture sample; and a transfer mechanism that copies the selected portion of the texture sample to the target surface. We introduce novel alternatives for each of these components. Use of √2-subdivision surfaces provides the mesh hierarchy and allows fine control over the surface complexity. Adaptive subdivision is used to create an even vertex distribution over the surface. Use of the graph defined by a surface region for matching, rather than a regular texture neighbourhood, provides for flexible control over the scale of the texture and allows simultaneous matching against multiple levels of an image pyramid created from the texture sample. We use graph cuts for texture transfer, adapting this scheme to the context of surface synthesis. The resulting surface textures are realistic, tolerant of local mesh detail and are comparable to results produced by texture neighbourhood sampling approaches.","PeriodicalId":325699,"journal":{"name":"International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-01-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130092175","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Image space occlusion culling is a powerful approach to reduce the rendering load of large polygonal models. However, occlusion culling is not for free; it trades overhead costs against the rendering costs of the possibly occluded geometry. Meanwhile, occlusion queries based on image space occlusion culling are supported on modern graphics hardware. However, a significant consumption of fillrate bandwidth and latency costs are associated with these queries. In this paper, we propose new techniques to reduce redundant occlusion queries. Our approach uses several "Occupancy Maps" to organize scene traversal. The respective information is accumulated efficiently by hardware-supported asynchronous occlusion queries. To avoid redundant requests, we arrange these multiple occlusion queries according to the information of the Occupancy Maps. Our presented technique is conservative and benefits from a partial depth order of the geometry.
{"title":"Occlusion-driven scene sorting for efficient culling","authors":"Dirk Staneker, D. Bartz, W. Straßer","doi":"10.1145/1108590.1108607","DOIUrl":"https://doi.org/10.1145/1108590.1108607","url":null,"abstract":"Image space occlusion culling is a powerful approach to reduce the rendering load of large polygonal models. However, occlusion culling is not for free; it trades overhead costs with the rendering costs of the possibly occluded geometry. Meanwhile, occlusion queries based on image space occlusion culling are supported on modern graphics hardware. However, a significant consumption of fillrate bandwidth and latency costs are associated with these queries.In this paper, we propose new techniques to reduce redundant occlusion queries. Our approach uses several \"Occupancy Maps\" to organize scene traversal. The respective information is accumulated efficiently by hardware-supported asynchronous occlusion queries. To avoid redundant requests, we arrange these multiple occlusion queries according to the information of the Occupancy Maps. Our presented technique is conservative and benefits from a partial depth order of the geometry.","PeriodicalId":325699,"journal":{"name":"International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-01-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134042035","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We modify a selection of interactive modeling tools for use in a procedural modeling environment. These tools are selection, extrusion, subdivision and curve shaping. We create human models to demonstrate that these tools are appropriate for use on hierarchical objects. Our tools support the main benefits of procedural modeling, which are: the use of parameterisation to control and vary a model, varying levels of detail, increased model complexity, base shape independence and database amplification. We demonstrate scripts which provide each of these benefits.
{"title":"Procedural modeling facilities for hierarchical object generation","authors":"C. Morkel, S. Bangay","doi":"10.1145/1108590.1108614","DOIUrl":"https://doi.org/10.1145/1108590.1108614","url":null,"abstract":"We modify a selection of interactive modeling tools for use in a procedural modeling environment. These tools are selection, extrusion, subdivision and curve shaping. We create human models to demonstrate that these tools are appropriate for use on hierarchical objects. Our tools support the main benefits of procedural modeling, which are: the use of parameterisation to control and very a model, varying levels of detail, increased model complexity, base shape independence and database amplification. We demonstrate scripts which provide each of these benefits.","PeriodicalId":325699,"journal":{"name":"International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-01-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131289722","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we describe a novel approach to 3D shape modelling, targeting the reconstruction and repair of digitised models -- a task that is frequently encountered in particular in the fields of cultural heritage and archaeology. In these fields, faithfully digitised models often need to be restored in order to visualise the object in its original state, reversing the effects of aging or decay. In our approach, we combine intuitive free-form modelling techniques with automatic 3D surface completion to derive a powerful modelling methodology that, on the one hand, is capable of incorporating a user's expertise in the surface completion process. The automatic completion, on the other hand, reconstructs the required surface detail in the modelled region and thus frees the user from the need to model every last detail manually. The power and feasibility of our approach is demonstrated with several examples.
{"title":"Free-form modelling for surface inpainting","authors":"G. Bendels, M. Guthe, R. Klein","doi":"10.1145/1108590.1108599","DOIUrl":"https://doi.org/10.1145/1108590.1108599","url":null,"abstract":"In this paper, we describe a novel approach to 3D shape modelling, targeting at the reconstruction and repair of digitised models -- a task that is frequently encountered in particular in the fields of cultural heritage and archaeology. In these fields, faithfully digitised models are often to be restorated in order to visualise the object in its original state, reversing the effects of aging or decay. In our approach, we combine intuitive free-form modelling techniques with automatic 3D surface completion to derive a powerful modelling methodology that on the one hand is capable of including a user's expertise into the surface completion process. The automatic completion, on the other hand, reconstructs the required surface detail in the modelled region and thus frees the user from the need to model every last detail manually. The power and feasibility of our approach is demonstrated with several examples.","PeriodicalId":325699,"journal":{"name":"International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-01-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116930542","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present a subdivision based algorithm for multi-resolution hexahedral meshing. The input is a bounding rectilinear domain with a set of embedded 2-manifold boundaries of arbitrary genus and topology. The algorithm first constructs a simplified Voronoi structure to partition the object into individual components that can then be meshed separately. We create a coarse hexahedral mesh for each Voronoi cell, giving us an initial hexahedral scaffold. Recursive hexahedral subdivision of this scaffold yields adaptive meshes. Splitting and smoothing the boundary cells makes the mesh conform to the input 2-manifolds. Our choice of smoothing rules makes the resulting boundary surface of the hexahedral mesh C2 continuous in the limit (C1 at extraordinary points), while also keeping a definite bound on the condition number of the Jacobian of the hexahedral mesh elements. By modifying the crease smoothing rules, we can also guarantee that the sharp features in the data are captured. Subdivision guarantees that we achieve a very good approximation for a given tolerance, with optimal mesh elements for each level of detail (LoD).
{"title":"Volume subdivision based hexahedral finite element meshing of domains with interior 2-manifold boundaries","authors":"C. Bajaj, L. C. Karlapalem","doi":"10.1145/1108590.1108611","DOIUrl":"https://doi.org/10.1145/1108590.1108611","url":null,"abstract":"We present a subdivision based algorithm for multi-resolution Hexahedral meshing. The input is a bounding rectilinear domain with a set of embedded 2-manifold boundaries of arbitrary genus and topology. The algorithm first constructs a simplified Voronoi structure to partition the object into individual components that can be then meshed separately. We create a coarse hexahedral mesh for each Voronoi cell giving us an initial hexahedral scaffold. Recursive hexahedral subdivision of this hexahedral scaffold yields adaptive meshes. Splitting and Smoothing the boundary cells makes the mesh conform to the input 2-manifolds. Our choice of smoothing rules makes the resulting boundary surface of the hexahedral mesh as C2 continuous in the limit (C1 at extra-ordinary points), while also keeping a definite bound on the condition number of the Jacobian of the hexahedral mesh elements. By modifying the crease smoothing rules, we can also guarantee that the sharp features in the data are captured. Subdivision guarantees that we achieve a very good approximation for a given tolerance, with optimal mesh elements for each Level of Detail (LoD).","PeriodicalId":325699,"journal":{"name":"International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-01-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124977164","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this work we present a rendering method with guaranteed interactive frame rates in complex 3D scenes. The algorithm is based on a new data structure computed in a preprocessing step to avoid frozen displays in large simulative visualizations such as industrial plants, typically described as CAD models. During preprocessing, polygons are grouped by size, and within these groups core clusters are computed based on similarity and locality. The clusters and polygons build up a hierarchy whose weights are determined in repeated stages of re-grouping and re-clustering. This additional information allows a subset of all primitives to be chosen to reduce scene complexity, depending on the viewer's position, viewing direction and the pre-determined weights within the hierarchy. To guarantee a specific frame rate, the number of rendered primitives is limited to a constant, typically constrained by the hardware. This reduction is controlled by the pre-calculated weights and the viewer's position, and is not done arbitrarily. The rendered subset is nevertheless a suitable scene approximation that covers the viewer's focus of interest. Combining all of this, a scene of 140 million polygons can be rendered at a constant 12 fps. Practical results indicate that our approach leads to good scene approximations and real-time rendering of very large environments at the same time.
{"title":"Size equivalent cluster trees (SEC-Trees) realtime rendering of large industrial scenes","authors":"Michael Kortenjan, Gunnar Schomaker","doi":"10.1145/1108590.1108608","DOIUrl":"https://doi.org/10.1145/1108590.1108608","url":null,"abstract":"In this work we present a rendering method with guaranteed interactive frame-rates in complex 3D scenes. The algorithm is based on an new data structure determined in a preprocessing to avoid frozen displays in large simulative visualizations like industrial plants, typically described as CAD-Models. Within a preprocessing polygons are grouped by size and within these groups core-clusters are calculated based on similarity and locality. The clusters and polygons are building up a hierarchy including weights ascertained within repetitive stages of re-grouping and re-clustering. This additional information allows to choose a subset over all primitives to reduce scene complexity depending on the viewer's position, sight and the determined weights within the hierarchy. To guarantee a specific frame rate the number of rendered primitives is limited by a constant and typically constrained by hardware. This reduction is controlled by the pre-calculated weights, and the viewer's position and is not done arbitrarily. At least the rendered section is a suitable scene approximation that includes the viewer's interests. Combining all this a constant frame-rate including 140 million polygons at 12 fps is obtainable. Practical results indicate that our approach leads to good scene approximations and realtime rendering of very large environments at the same time.","PeriodicalId":325699,"journal":{"name":"International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-01-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130918592","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The identification of mammals through the use of their hair is important in the fields of forensics and ecology. The application of computer pattern recognition techniques to this process provides a means of reducing the subjectivity found in the process, as manual techniques rely on the interpretation of a human expert rather than quantitative measures. The first application of image pattern recognition techniques to the classification of African mammalian species using hair patterns is presented. This application uses a 2D Gabor filter-bank and motivates the use of moments to classify hair scale patterns. Application of a 2D Gabor filter-bank to hair scale processing provides results of 52% accuracy when using a filter-bank of size four and 72% accuracy when using a filter-bank of size eight. These initial results indicate that 2D Gabor filters produce information that may be successfully used to classify hair according to images of its patterns.
{"title":"The identification of mammalian species through the classification of hair patterns using image pattern recognition","authors":"Thamsanqa Moyo, S. Bangay, G. Foster","doi":"10.1145/1108590.1108619","DOIUrl":"https://doi.org/10.1145/1108590.1108619","url":null,"abstract":"The identification of mammals through the use of their hair is important in the fields of forensics and ecology. The application of computer pattern recognition techniques to this process provides a means of reducing the subjectivity found in the process, as manual techniques rely on the interpretation of a human expert rather than quantitative measures. The first application of image pattern recognition techniques to the classification of African mammalian species using hair patterns is presented. This application uses a 2D Gabor filter-bank and motivates the use of moments to classify hair scale patterns. Application of a 2D Gabor filter-bank to hair scale processing provides results of 52% accuracy when using a filter-bank of size four and 72% accuracy when using a filter-bank of size eight. These initial results indicate that 2D Gabor filters produce information that may be successfully used to classify hair according to images of its patterns.","PeriodicalId":325699,"journal":{"name":"International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-01-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130101900","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The computation of high-fidelity images in real time remains one of the key challenges for computer graphics. Recent work has shown that, by understanding the human visual system, selective rendering may be used to render only those parts to which the human viewer is attending at high quality and the rest of the scene at a much lower quality. This can result in a significant reduction in computational time, without the viewer being aware of the quality difference. Selective rendering is guided by models of the human visual system, typically in the form of a 2D saliency map, which predicts where the user will be looking in any scene. Computation of these maps often takes many seconds, thus precluding such an approach in any interactive system, where many frames need to be rendered per second. In this paper we present a novel saliency map which exploits the computational performance of modern GPUs. With our approach it is thus possible to calculate this map in milliseconds, allowing it to be part of a real-time rendering system. In addition, we show how depth, habituation and motion can be added to the saliency map to further guide the selective rendering. This ensures that only the most perceptually important parts of any animated sequence need be rendered in high quality. The rest of the animation can be rendered at a significantly lower quality, and thus much lower computational cost, without the user being aware of this difference.
{"title":"A GPU based saliency map for high-fidelity selective rendering","authors":"P. Longhurst, K. Debattista, A. Chalmers","doi":"10.1145/1108590.1108595","DOIUrl":"https://doi.org/10.1145/1108590.1108595","url":null,"abstract":"The computation of high-fidelity images in real-time remains one of the key challenges for computer graphics. Recent work has shown that by understanding the human visual system, selective rendering may be used to render only those parts to which the human viewer is attending at high quality and the rest of the scene at a much lower quality. This can result in a significant reduction in computational time, without the viewer being aware of the quality difference. Selective rendering is guided by models of the human visual system, typically in the form of a 2D saliency map, which predict where the user will be looking in any scene. Computation of these maps themselves often take many seconds, thus precluding such an approach in any interactive system, where many frames need to be rendered per second. In this paper we present a novel saliency map which exploits the computational performance of modern GPUs. With our approach it is thus possible to calculate this map in milliseconds, allowing it to be part of a real time rendering system. In addition, we also show how depth, habituation and motion can be added to the saliency map to further guide the selective rendering. This ensures that only the most perceptually important parts of any animated sequence need be rendered in high quality. The rest of the animation can be rendered at a significantly lower quality, and thus much lower computational cost, without the user being aware of this difference.","PeriodicalId":325699,"journal":{"name":"International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-01-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123370961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}