
International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa: Latest Publications

Enhanced illumination of reconstructed dynamic environments using a real-time flame model
Flavien Bridault, M. Leblond, F. Rousselle
The goal of interactive walkthroughs in three-dimensional computer reconstructions is to give people a sensation of immersion in different sites at different periods. Realism of these walkthroughs is achieved not only with detailed 3D models but also with correct illumination that reflects the means of lighting of those times. Working on enhancing the visual appearance of the computer reconstruction of the Gallo-Roman forum of Bavay, we propose a model that reproduces the shape, animation and illumination of simple flames produced by candles and oil lamps in real time. Flame dynamics are simulated using a Navier-Stokes equation solver that animates particle skeletons. The flame's shape is obtained by using those particles as control points of a NURBS surface. The photometric distribution of a real flame is captured with a spectrophotometer and stored in a photometric solid. This solid is used as a spherical texture in a pixel shader to accurately compute the illumination produced by the flame in any direction. Our model is compatible with existing shadow algorithms and is designed to be easily incorporated into any real-time cultural heritage application.
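The core idea of the photometric solid is a direction-indexed intensity lookup. Below is a minimal CPU-side sketch of that lookup, assuming the measured distribution is stored as a 2D table indexed by polar and azimuthal angle; the function and array names are hypothetical, not the authors' shader code.

```python
import numpy as np

def flame_intensity(direction, photometric_solid):
    """Look up emitted intensity along a unit direction from a photometric
    solid stored as a (theta, phi) intensity table.

    direction         -- 3-vector (x, y, z), need not be normalised
    photometric_solid -- 2D array; rows index theta in [0, pi],
                         columns index phi in [0, 2*pi)
    """
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    theta = np.arccos(np.clip(d[2], -1.0, 1.0))    # polar angle from +z
    phi = np.arctan2(d[1], d[0]) % (2.0 * np.pi)   # azimuth in [0, 2*pi)

    rows, cols = photometric_solid.shape
    i = min(int(theta / np.pi * rows), rows - 1)
    j = min(int(phi / (2.0 * np.pi) * cols), cols - 1)
    return photometric_solid[i, j]

# Example: an isotropic "flame" of unit intensity, queried straight up.
solid = np.ones((64, 128))
print(flame_intensity((0.0, 0.0, 1.0), solid))  # -> 1.0
```

In the paper this lookup runs per pixel in a shader; the sketch only shows the spherical-coordinate indexing it relies on.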
Citations: 22
Illustrating design and spatial assembly of interactive CSG
M. Nienhaus, Florian Kirsch, J. Döllner
For the interactive construction of CSG models, understanding the layout of the models is essential for their efficient manipulation. To comprehend the position and orientation of the aggregated components of a CSG model, we need to perceive its visible and occluded parts as a whole. Hence, transparency and enhanced outlines are key techniques for communicating deeper insight. We present a novel real-time non-photorealistic rendering technique that illustrates the design and spatial assembly of CSG models. As enabling technology, we first present a solution for combining depth peeling with image-based CSG rendering. The rendering technique can then extract layers of ordered depth from the CSG model up to its entire depth complexity. Capturing the surface colors of each layer and combining the results thereafter synthesizes order-independent transparency as one major illustration technique for interactive CSG. We further define perceptually important edges of CSG models and integrate an image-space edge-enhancement technique that can detect them in each layer. To outline the model's layout, the rendering technique extracts perceptually important edges that are directly visible, i.e., edges that lie on the model's outer surface, and edges that are occluded, i.e., edges that are hidden by its interior composition. Finally, we combine these edges with the order-independent transparent depictions to generate edge-enhanced illustrations, which provide clear insight into CSG models, reveal their complex spatial assembly, and thus simplify their interactive construction.
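The image-space edge enhancement applied to each peeled layer can be illustrated with a depth-discontinuity filter. The sketch below is not the authors' method, only a minimal stand-in: it flags large depth gradients in a single layer using a Sobel operator, with the layer array and threshold as assumptions.

```python
import numpy as np
from scipy import ndimage

def depth_edges(depth_layer, threshold=0.05):
    """Return a boolean mask of depth discontinuities in one peeled layer.

    depth_layer -- 2D array of per-pixel depth values for a single layer
    threshold   -- minimum gradient magnitude treated as an edge
    """
    gx = ndimage.sobel(depth_layer, axis=1)
    gy = ndimage.sobel(depth_layer, axis=0)
    magnitude = np.hypot(gx, gy)
    return magnitude > threshold

# Example: a step in depth produces an edge along the step boundary.
layer = np.zeros((8, 8))
layer[:, 4:] = 1.0
print(depth_edges(layer).any())  # -> True
```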
Citations: 6
Interaction and visualisation across multiple displays in ubiquitous computing environments
H. Slay, B. Thomas
This paper describes the Universal Interaction Controller (UIC), a user interface framework and device designed to support interactions in ubiquitous computing environments, and the in-situ visualisation of ambient information in environments equipped with multiple heterogeneous displays. We describe the device and the infrastructure we have created to support it. We present the use of augmented reality to display information that is outside the bounds of traditional display surfaces.
Citations: 4
Implementing the "GrabCut" segmentation technique as a plugin for the GIMP
M. Marsh, S. Bangay, A. Lobb
Image segmentation requires a segmentation tool that is fast and easy to use. The GIMP has built-in segmentation tools, but under some circumstances these tools perform badly. "GrabCut" is an innovative segmentation technique that uses both region and boundary information to perform segmentation. Several variations on the "GrabCut" algorithm have been implemented as a plugin for the GIMP. The results obtained using "GrabCut" are comparable to, and often better than, the results of all the other built-in segmentation tools.
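For readers who want to try the underlying technique without the GIMP plugin, GrabCut is also available in OpenCV. A minimal sketch using cv2.grabCut follows; the rectangle, file name and helper name are assumptions, and this is OpenCV's implementation rather than the authors' plugin.

```python
import numpy as np
import cv2

def grabcut_foreground(image, rect, iterations=5):
    """Segment the object inside `rect` (x, y, w, h) with OpenCV's GrabCut.

    Returns the input image with background pixels zeroed out.
    """
    mask = np.zeros(image.shape[:2], dtype=np.uint8)
    bgd_model = np.zeros((1, 65), dtype=np.float64)  # internal GMM state
    fgd_model = np.zeros((1, 65), dtype=np.float64)

    cv2.grabCut(image, mask, rect, bgd_model, fgd_model,
                iterations, cv2.GC_INIT_WITH_RECT)

    # Keep pixels marked as definite or probable foreground.
    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0)
    return image * fg[:, :, np.newaxis].astype(image.dtype)

# Usage (hypothetical file): cut out whatever lies inside a user-drawn box.
# img = cv2.imread("photo.png")
# result = grabcut_foreground(img, rect=(50, 40, 200, 180))
```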
Citations: 8
InetVis, a visual tool for network telescope traffic analysis
J. V. Riel, B. Irwin
This article illustrates the merits of visual analysis as it presents preliminary findings using InetVis - an animated 3-D scatter plot visualization of network events. The concepts and features of InetVis are evaluated with reference to related work in the field. Tested against a network scanning tool, anticipated visual signs of port scanning and network mapping serve as a proof of concept. This research also unveils substantial amounts of suspicious activity present in Internet traffic during August 2005, as captured by a class C network telescope. InetVis is found to have promising scalability whilst offering salient depictions of intrusive network activity.
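To give a sense of the kind of mapping InetVis performs, here is a minimal static sketch of a 3D scatter plot of network events with matplotlib. The event tuples and axis assignments (source address, time, destination port) are illustrative assumptions, not data from the paper, and the real tool renders an animated view.

```python
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401  (registers the 3D projection)

# Hypothetical telescope events: (seconds since start, source offset within a /24, dst port)
events = [(1, 10, 22), (2, 10, 23), (3, 10, 25), (4, 10, 80),      # vertical port scan
          (5, 40, 445), (6, 41, 445), (7, 42, 445), (8, 43, 445)]  # horizontal sweep

t, src, port = zip(*events)

fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
ax.scatter(src, t, port)
ax.set_xlabel("source address (offset within /24)")
ax.set_ylabel("time (s)")
ax.set_zlabel("destination port")
plt.show()
```

Port scans show up as vertical columns (one source, many ports) and network sweeps as horizontal lines (many sources, one port), which is the visual signature the article describes.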
Citations: 32
Affective scene generation
C. Hultquist, J. Gain, David E. Cairns
A new technique for generating virtual environments is proposed, whereby the user describes the environment that they wish to create using adjectives. An entire scene is then procedurally generated, based on the mapping of these adjectives to the parameter space of the procedural models used. This mapping is determined through a pre-process, during which the user is presented with a number of scenes and asked to describe them using adjectives. With such a technique, the ability to create complex virtual environments is extended to users with little or no technical knowledge, and additionally provides a means for experienced users to quickly generate a large, complex environment which can then be modified by hand.
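The central step is the mapping from adjective space to procedural parameter space, learned from user-rated example scenes. A minimal sketch of one plausible realisation using a linear least-squares fit is shown below; the adjectives, ratings and parameters are invented for illustration and the paper does not specify this particular fitting method.

```python
import numpy as np

# Pre-process (hypothetical data): each training scene has user-supplied
# adjective ratings in [0, 1] and the procedural parameters that produced it.
adjectives = ["barren", "lush", "mountainous"]
ratings = np.array([[0.9, 0.1, 0.2],     # scene A: barren plain
                    [0.1, 0.9, 0.3],     # scene B: dense vegetation
                    [0.3, 0.4, 0.9]])    # scene C: alpine terrain
parameters = np.array([[0.05, 0.1],      # (tree density, terrain roughness)
                       [0.80, 0.2],
                       [0.30, 0.9]])

# Fit a linear map from adjective space to parameter space.
mapping, *_ = np.linalg.lstsq(ratings, parameters, rcond=None)

def generate_parameters(description):
    """Map a new adjective rating vector to procedural parameters."""
    return np.asarray(description) @ mapping

print(generate_parameters([0.2, 0.8, 0.5]))  # mostly lush, somewhat mountainous
```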
Citations: 5
Formal specification of region-based model for semantic extraction in road traffic monitoring
Johan Köhler, J. Tapamo
This work forms part of the development of a framework for semantic extraction in road traffic monitoring. In this paper we develop a scene, object and event model based on regions in the ground plane. The model is formally specified using the Güting spatio-temporal formalism for moving regions and Z notation. The result is a domain-independent knowledge representation that supports reasoning about time-varying regions and is expressed in an accessible mathematical formalism.
Citations: 0
Identification and reconstruction of bullets from multiple X-rays
Simon J. Perkins, P. Marais
We present a framework for the rapid detection and 3D localisation of bullets (or other compact shapes) from a sparse set of cross-sectional patient X-rays. The intention of this work is to assess a software architecture for an application-specific alternative to conventional CT that can be leveraged in poor communities using less expensive technology. Of necessity, such a system will not provide the diagnostic sophistication of full CT, but in many cases this added complexity may not be required. While a pair of X-rays can provide some 3D positional information to a clinician, such an assessment is qualitative, and occluding tissue/bone may lead to an incorrect assessment of the internal location of the bullet. Our system uses a combination of model-based segmentation and CT-like back-projection to arrive at an approximate volume representation of the embedded shape, based on a sequence of X-rays which encompasses the affected area. Depending on the nature of the injury, such a 3D shape approximation may provide sufficient information for surgical intervention. The results of our proof-of-concept study show that, algorithmically, such a system is indeed realisable: a 3D reconstruction is possible from a small set of X-rays, with only a small computational load. A combination of real X-rays and simulated 3D data is used to evaluate the technique.
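The back-projection idea can be illustrated with silhouette carving: each segmented bullet outline is extruded along its viewing direction and the extrusions are intersected. The sketch below assumes orthographic projections aligned with the volume axes, which is a simplification of the geometry an actual X-ray setup would need; all names are hypothetical.

```python
import numpy as np

def carve_volume(masks, axes, grid_size=64):
    """Approximate an embedded shape by intersecting back-projected silhouettes.

    masks -- list of 2D boolean silhouette images (bullet = True)
    axes  -- for each mask, the volume axis (0, 1 or 2) the X-ray was taken
             along, assuming an orthographic projection down that axis
    """
    volume = np.ones((grid_size, grid_size, grid_size), dtype=bool)
    for mask, axis in zip(masks, axes):
        # Resample the silhouette onto the voxel grid resolution.
        ys = np.arange(grid_size) * mask.shape[0] // grid_size
        xs = np.arange(grid_size) * mask.shape[1] // grid_size
        m = mask[np.ix_(ys, xs)]
        # Extrude the silhouette along the projection axis and intersect.
        volume &= np.expand_dims(m, axis=axis)
    return volume

# Example: two orthogonal circular silhouettes carve out a compact blob.
yy, xx = np.mgrid[0:64, 0:64]
disc = (yy - 32) ** 2 + (xx - 32) ** 2 < 10 ** 2
print(carve_volume([disc, disc], axes=[0, 1]).sum(), "occupied voxels")
```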
Citations: 3
Cost prediction for global illumination using a fast rasterised scene preview
R. Gillibrand, P. Longhurst, K. Debattista, A. Chalmers
The media industry is demanding increasing fidelity for its rendered images. Despite the advent of modern GPUs, the computational requirements of physically based global illumination algorithms are such that it is still not possible to render high-fidelity images in real time. The time constraints of commercial rendering are such that the user would like an idea of just how long it will take to render an animated sequence prior to the actual rendering. This information is necessary to determine whether the desired quality is achievable in the time available, or indeed whether, for example, it is affordable to carry out the work on a render farm. This paper presents a comparison of different pixel profiling strategies which may be used to predict the overall rendering cost of a high-fidelity global illumination solution. A fast rasterised scene preview is proposed which provides a more accurate positioning and weighting of samples, to achieve accurate cost prediction.
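The simplest pixel profiling strategy the paper compares against is uniform sampling: time a sparse subset of pixels and extrapolate to the whole frame. A minimal sketch of that baseline follows (the rasterised-preview weighting that the paper proposes is not shown); the function and the dummy renderer are hypothetical.

```python
import random
import time

def predict_render_cost(width, height, render_pixel, sample_fraction=0.001):
    """Estimate the total time to render one frame by timing a sparse,
    uniformly chosen sample of pixels and extrapolating."""
    n_samples = max(1, int(width * height * sample_fraction))
    sample = [(random.randrange(width), random.randrange(height))
              for _ in range(n_samples)]
    start = time.perf_counter()
    for x, y in sample:
        render_pixel(x, y)
    elapsed = time.perf_counter() - start
    return elapsed / n_samples * width * height

# Example with a stand-in pixel renderer of varying cost.
def dummy_pixel(x, y):
    return sum(i * i for i in range(100 + (x + y) % 50))

print(f"predicted frame time: {predict_render_cost(640, 480, dummy_pixel):.2f} s")
```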
Citations: 6
Duplicating road patterns in south african informal settlements using procedural techniques
Kevin R. Glass, C. Morkel, S. Bangay
The formation of informal settlements in and around urban complexes has largely been ignored in the context of procedural city modeling. However, many cities in South Africa and globally can attest to the presence of such settlements. This paper analyses the phenomenon of informal settlements from a procedural modeling perspective. Aerial photography from two South African urban complexes, namely Johannesburg and Cape Town, is used as a basis for extracting various features that distinguish different types of settlements. In particular, the road patterns which have formed within such settlements are analysed, and various procedural techniques are proposed (including Voronoi diagrams, subdivision and L-systems) to replicate the identified features. A qualitative assessment of the procedural techniques is provided, and the most suitable combination of techniques is identified for unstructured and structured settlements. In particular, it is found that a combination of Voronoi diagrams and subdivision provides the closest match to unstructured informal settlements, while a combination of L-systems, Voronoi diagrams and subdivision produces the closest pattern to a structured informal settlement.
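The Voronoi component of the approach can be sketched directly with SciPy: treat randomly placed "settlement seed" points as cell sites and take the finite Voronoi ridges as candidate road segments. This is a minimal illustration only; the seed distribution, extent and the paper's subdivision and L-system stages are not reproduced here.

```python
import numpy as np
from scipy.spatial import Voronoi

def voronoi_road_segments(n_seeds=50, extent=1000.0, seed=1):
    """Generate candidate road segments as the finite ridges of a Voronoi
    diagram built over random settlement seed points."""
    rng = np.random.default_rng(seed)
    points = rng.uniform(0.0, extent, size=(n_seeds, 2))
    vor = Voronoi(points)

    segments = []
    for v1, v2 in vor.ridge_vertices:
        if v1 == -1 or v2 == -1:
            continue  # skip ridges that extend to infinity
        segments.append((vor.vertices[v1], vor.vertices[v2]))
    return segments

roads = voronoi_road_segments()
print(len(roads), "road segments")
```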
Citations: 31