Enhanced illumination of reconstructed dynamic environments using a real-time flame model
Flavien Bridault, M. Leblond, F. Rousselle. DOI: 10.1145/1108590.1108596

The goal of interactive walkthroughs in three-dimensional computer reconstructions is to give people a sensation of immersion in different sites at different periods. Realism in these walkthroughs is achieved not only with detailed 3D models but also with illumination that is faithful to the means of lighting of those times. Working on enhancing the visual appearance of the computer reconstruction of the Gallo-Roman forum of Bavay, we propose a model that reproduces the shape, animation and illumination of simple flames produced by candles and oil lamps in real time. Flame dynamics are simulated using a Navier-Stokes equation solver that animates particle skeletons. The flame's shape is obtained by using these particles as control points of a NURBS surface. The photometric distribution of a real flame is captured by a spectrophotometer and stored in a photometric solid, which is used as a spherical texture in a pixel shader to accurately compute the illumination produced by the flame in any direction. Our model is compatible with existing shadow algorithms and is designed to be easily incorporated into any real-time cultural heritage application.
Illustrating design and spatial assembly of interactive CSG
M. Nienhaus, Florian Kirsch, J. Döllner. DOI: 10.1145/1108590.1108605

For the interactive construction of CSG models, understanding the layout of a model is essential for its efficient manipulation. To comprehend the position and orientation of the aggregated components of a CSG model, we need to perceive its visible and occluded parts as a whole. Hence, transparency and enhanced outlines are key techniques for communicating deeper insight. We present a novel real-time non-photorealistic rendering technique that illustrates the design and spatial assembly of CSG models. As enabling technology, we first present a solution for combining depth peeling with image-based CSG rendering. The rendering technique can then extract layers of ordered depth from the CSG model up to its entire depth complexity. Capturing the surface colors of each layer and combining the results thereafter synthesizes order-independent transparency, one major illustration technique for interactive CSG. We further define perceptually important edges of CSG models and integrate an image-space edge-enhancement technique that can detect them in each layer. In order to outline the model's layout, the rendering technique extracts perceptually important edges that are directly visible, i.e., edges that lie on the model's outer surface, or that are occluded, i.e., edges hidden by its interior composition. Finally, we combine these edges with the order-independent transparent depictions to generate edge-enhanced illustrations, which provide clear insight into CSG models, reveal their complex spatial assembly, and thus simplify their interactive construction.
Interaction and visualisation across multiple displays in ubiquitous computing environments
H. Slay, B. Thomas. DOI: 10.1145/1108590.1108603

This paper describes the Universal Interaction Controller (UIC), a user interface framework and device designed to support interactions in ubiquitous computing environments, and the in-situ visualisation of ambient information in environments equipped with multiple heterogeneous displays. We describe the device and the infrastructure we have created to support it, and we present the use of augmented reality to display information that lies outside the bounds of traditional display surfaces.
Implementing the "GrabCut" segmentation technique as a plugin for the GIMP
M. Marsh, S. Bangay, A. Lobb. DOI: 10.1145/1108590.1108618

Image segmentation requires a segmentation tool that is fast and easy to use. The GIMP has built-in segmentation tools, but under some circumstances these tools perform badly. "GrabCut" is an innovative segmentation technique that uses both region and boundary information to perform segmentation. Several variations on the "GrabCut" algorithm have been implemented as a plugin for the GIMP. The results obtained using "GrabCut" are comparable to, and often better than, the results of the other built-in segmentation tools.
InetVis, a visual tool for network telescope traffic analysis
J. V. Riel, B. Irwin. DOI: 10.1145/1108590.1108604

This article illustrates the merits of visual analysis as it presents preliminary findings using InetVis, an animated 3-D scatter-plot visualization of network events. The concepts and features of InetVis are evaluated with reference to related work in the field. Tested against a network scanning tool, anticipated visual signs of port scanning and network mapping serve as a proof of concept. This research also unveils substantial amounts of suspicious activity present in Internet traffic during August 2005, as captured by a class C network telescope. InetVis is found to have promising scalability whilst offering salient depictions of intrusive network activity.
Affective scene generation
C. Hultquist, J. Gain, David E. Cairns. DOI: 10.1145/1108590.1108600

A new technique for generating virtual environments is proposed, whereby the user describes the environment that they wish to create using adjectives. An entire scene is then procedurally generated, based on the mapping of these adjectives to the parameter space of the procedural models used. This mapping is determined through a pre-process, during which the user is presented with a number of scenes and asked to describe them using adjectives. Such a technique extends the ability to create complex virtual environments to users with little or no technical knowledge, and additionally gives experienced users a means of quickly generating a large, complex environment which can then be modified by hand.
Formal specification of region-based model for semantic extraction in road traffic monitoring
Johan Köhler, J. Tapamo. DOI: 10.1145/1108590.1108615

This work forms part of the development of a framework for semantic extraction in road traffic monitoring. In this paper we develop a scene, object and event model based on regions in the ground plane. The model is formally specified using the Güting spatio-temporal formalism for moving regions and Z notation. The result is a domain-independent knowledge representation that supports reasoning about time-varying regions and that is expressed in an accessible mathematical formalism.
Identification and reconstruction of bullets from multiple X-rays
Simon J. Perkins, P. Marais. DOI: 10.1145/1108590.1108610

We present a framework for the rapid detection and 3D localisation of bullets (or other compact shapes) from a sparse set of cross-sectional patient x-rays. The intention of this work is to assess a software architecture for an application-specific alternative to conventional CT which can be deployed in poor communities using less expensive technology. Of necessity, such a system will not provide the diagnostic sophistication of full CT, but in many cases this added complexity may not be required. While a pair of x-rays can provide some 3D positional information to a clinician, such an assessment is qualitative, and occluding tissue or bone may lead to an incorrect assessment of the internal location of the bullet. Our system uses a combination of model-based segmentation and CT-like back-projection to arrive at an approximate volume representation of the embedded shape, based on a sequence of x-rays which encompasses the affected area. Depending on the nature of the injury, such a 3D shape approximation may provide sufficient information for surgical intervention. The results of our proof-of-concept study show that, algorithmically, such a system is indeed realisable: a 3D reconstruction is possible from a small set of x-rays, with only a small computational load. A combination of real x-rays and simulated 3D data is used to evaluate the technique.
Cost prediction for global illumination using a fast rasterised scene preview
R. Gillibrand, P. Longhurst, K. Debattista, A. Chalmers. DOI: 10.1145/1108590.1108597

The media industry is demanding ever-increasing fidelity in its rendered images. Despite the advent of modern GPUs, the computational requirements of physically based global illumination algorithms are such that it is still not possible to render high-fidelity images in real time. The time constraints of commercial rendering are such that the user would like an idea of just how long it will take to render an animated sequence, prior to the actual rendering. This information is necessary to determine whether the desired quality is achievable in the time available, or indeed whether it is affordable to carry out the work on a render farm, for example. This paper presents a comparison of different pixel profiling strategies which may be used to predict the overall rendering cost of a high-fidelity global illumination solution. A fast rasterised scene preview is proposed which provides more accurate positioning and weighting of samples, to achieve accurate cost prediction.
Duplicating road patterns in South African informal settlements using procedural techniques
Kevin R. Glass, C. Morkel, S. Bangay. DOI: 10.1145/1108590.1108616

The formation of informal settlements in and around urban complexes has largely been ignored in the context of procedural city modelling. However, many cities in South Africa and globally can attest to the presence of such settlements. This paper analyses the phenomenon of informal settlements from a procedural modelling perspective. Aerial photography from two South African urban complexes, namely Johannesburg and Cape Town, is used as a basis for the extraction of various features that distinguish different types of settlements. In particular, the road patterns which have formed within such settlements are analysed, and various procedural techniques are proposed (including Voronoi diagrams, subdivision and L-systems) to replicate the identified features. A qualitative assessment of the procedural techniques is provided, and the most suitable combination of techniques is identified for unstructured and structured settlements. It is found that a combination of Voronoi diagrams and subdivision provides the closest match to unstructured informal settlements, while a combination of L-systems, Voronoi diagrams and subdivision produces the closest pattern to a structured informal settlement.