
Latest publications: International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa

A 3D visual analysis tool in support of the SANDF's growing ground based air defence simulation capability
B. Duvenhage, J. Delport, A. Louis
A 3D visual analysis tool has been developed to add value to the SANDF's growing Ground Based Air Defence (GBAD) System of Systems simulation capability. A time based XML interface between the simulation and analysis tool, via a TCP connection or a log file, allows individual simulation objects to be wholly updated or partially modified. Live pause and review of the simulation action is supported by employing data key frames and compressed XML for enhanced performance. An innovative configurable filter tree allows visual clutter to be reduced as required and an open source scene graph (OpenSceneGraph) manages the 3D scene representation and rendering. A visualisation capability is developed for the effective presentation of the dynamic air defence system behaviour, system state transitions and inter-system communication. The visual analysis tool has successfully been applied in support of system performance experiments, tactical doctrine development and simulation support during training and live field exercises. The 3D visualisation resulted in improved situational awareness during experiment analysis, in increased involvement of the SANDF in experiment analysis and in improved credibility of analysis results presented during live or after action visual feedback sessions.
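The whole-versus-partial update scheme of the time-based XML interface can be sketched as follows. The element and attribute names here are invented for illustration; the abstract does not publish the actual interface schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical frame format: a "key" object fully replaces that object's
# state (a data key frame), while a "delta" object modifies only the
# attributes it lists (a partial modification).
FRAME_XML = """
<frame time="12.5">
  <object id="sam-1" kind="key"><attr name="x" value="100"/><attr name="y" value="40"/></object>
  <object id="radar-2" kind="delta"><attr name="heading" value="270"/></object>
</frame>
"""

def apply_frame(state, frame_xml):
    """Apply one time-stamped XML frame to the simulation state dict."""
    root = ET.fromstring(frame_xml)
    for obj in root.iter("object"):
        oid = obj.get("id")
        attrs = {a.get("name"): float(a.get("value")) for a in obj.iter("attr")}
        if obj.get("kind") == "key":
            state[oid] = attrs                       # whole update: replace state
        else:
            state.setdefault(oid, {}).update(attrs)  # partial modification
    return float(root.get("time"))

state = {"radar-2": {"heading": 90.0, "range": 35.0}}
t = apply_frame(state, FRAME_XML)
```

Over a TCP stream the same function would be applied to each frame as it arrives; key frames are also what makes seeking for live pause and review cheap, since playback can restart from the nearest key frame.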
DOI: 10.1145/1294685.1294692 (published 2007-10-29)
Citations: 2
Extensible approach to the virtual worlds editing
V. Kovalcík, J. Flašar, Jirí Sochor
We present a virtual reality framework (VRECKO) with an editor capable of creating new scenes or applications using this framework. The VRECKO system consists of objects with predefined behaviors that an application designer can change dynamically. With instances of a special object type called Ability, we may extend or change the behaviors of objects in a scene. As an example of this approach, we present an editor that we implemented entirely as a set of abilities. Editing is done directly in the 3D environment, which has several benefits over 2D editing, particularly the possibility of working with a scene exactly as in the final application.
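The Ability mechanism can be illustrated with a minimal sketch; the class and method names below are assumptions for illustration, not the actual VRECKO API.

```python
class Ability:
    """Hypothetical base class: a unit of behaviour attached to a scene object."""
    def update(self, obj, dt):
        pass

class SceneObject:
    def __init__(self, name):
        self.name = name
        self.position = 0.0
        self.abilities = []

    def add_ability(self, ability):
        # behaviours are extended at runtime by attaching ability instances
        self.abilities.append(ability)

    def update(self, dt):
        for ability in self.abilities:
            ability.update(self, dt)

class MoveAbility(Ability):
    """Example ability: moves its owner along one axis at a fixed speed."""
    def __init__(self, speed):
        self.speed = speed

    def update(self, obj, dt):
        obj.position += self.speed * dt

cube = SceneObject("cube")
cube.add_ability(MoveAbility(speed=2.0))
cube.update(dt=0.5)  # position advances by speed * dt
```

Because abilities are ordinary objects, an editor built "entirely as a set of abilities" is just a particular collection of such instances attached to scene objects.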
DOI: 10.1145/1294685.1294691 (published 2007-10-29)
Citations: 3
Mechanisms for multimodality: taking fiction to another dimension
Kevin R. Glass, S. Bangay, B. Alcock
We present methods for automatically constructing representations of fiction books in a range of modalities: audibly, graphically and as 3D virtual environments. The correspondence between the sequential ordering of events against the order of events presented in the text is used to correctly resolve the dynamic interactions for each representation. Synthesised audio created from the fiction text is used to calibrate the base time-line against which the other forms of media are correctly aligned. The audio stream is based on speech synthesis using the text of the book, and is enhanced using distinct voices for the different characters in a book. Sound effects are included automatically. The graphical representation represents the text (as subtitles), identifies active characters and provides visual feedback of the content of the story. Dynamic virtual environments conform to the constraints implied by the story, and are used as a source of further visual content. These representations are all aligned to a common time-line, and combined using sequencing facilities to provide a multimodal version of the original text.
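Aligning the other modalities to the base time-line defined by the synthesised audio might look like this minimal sketch; the per-word timestamps and event anchors are invented for illustration.

```python
# Per-word start times (seconds) produced by the speech synthesiser define
# the base time-line; each media event is anchored at a word index.
word_times = [0.0, 0.4, 0.9, 1.5, 2.1]
events = [("subtitle", 0), ("character_enters", 2), ("sound_effect", 4)]

def align(events, word_times):
    """Map word-anchored events onto the audio time-line, in time order."""
    return sorted((word_times[index], kind) for kind, index in events)

timeline = align(events, word_times)
```

Each representation (subtitles, graphical feedback, virtual-environment actions) resolves against the same time-line, which is what lets the sequencing stage combine them into one multimodal presentation.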
DOI: 10.1145/1294685.1294708 (published 2007-10-29)
Citations: 4
Animating physically based explosions in real-time
L. Ek, Rune Vistnes, Odd Erik Gundersen
We present a framework for real-time animation of explosions that runs completely on the GPU. The simulation allows for arbitrary internal boundaries and is governed by a combustion process, a Stable Fluid solver, which includes thermal expansion, and turbulence modeling. The simulation results are visualised by two particle systems rendered using animated textures. The results are physically based, non-repeating, and dynamic real-time explosions with high visual quality.
DOI: 10.1145/1294685.1294696 (published 2007-10-29)
Citations: 9
Light field propagation and rendering on the GPU
J. Mortensen, Pankaj Khanna, M. Slater
This paper describes an algorithm that provides fast propagation and real-time walkthrough for globally illuminated synthetic scenes. A type of light field data structure is used for propagating radiance outward from emitters through the scene, accounting for any kind of L(S|D) light path. The light field employed is constructed by choosing a regular point subdivision over a hemisphere, to give a set of directions, and then corresponding to each direction there is a rectangular grid of parallel rays. Each rectangular grid of rays is further subdivided into rectangular tiles, such that each tile references a sequence of 2D images containing outgoing radiances of surfaces intersected by the rays in that tile. We present a novel propagation algorithm running entirely on the Graphics Processing Unit (GPU). It is incremental in that it can resolve visibility along a set of parallel rays in O(n) time and can produce a light field for a moderately complex scene - with complex illumination stored in millions of elements - in minutes and for simpler scenes in seconds. It is approximate but gracefully converges to a correct solution as verified by comparing images with path traced counterparts. We show how to render globally lit images directly from the GPU data structure without CPU involvement at real-time frame rates and high resolutions.
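The tiling of each direction's rectangular ray grid can be sketched as a pure index computation. The grid and tile sizes below are assumptions; the paper's actual parameters may differ.

```python
GRID = 256  # rays per side of the rectangular grid for one direction (assumed)
TILE = 32   # tile side: the grid splits into (GRID // TILE)**2 tiles (assumed)

def ray_to_tile(u, v):
    """Map a ray's (u, v) grid coordinates to (tile index, offset in tile)."""
    tiles_per_row = GRID // TILE
    tile = (v // TILE) * tiles_per_row + (u // TILE)
    offset = (v % TILE) * TILE + (u % TILE)
    return tile, offset
```

Each tile then references a sequence of 2D radiance images, so looking up the outgoing radiance along a ray is two indexings: first the tile, then the offset within it.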
DOI: 10.1145/1294685.1294688 (published 2007-10-29)
Citations: 4
Cloth simulation and collision detection using geometry images
Nico Zink, A. Hardy
The simulation and animation of cloth has attracted considerable research interest by the computer graphics community. Cloth that behaves realistically is already expected in animated films, and real-time applications are certain to follow. A common challenge faced when simulating the complex behaviour of cloth, especially at interactive frame rates, is maintaining an acceptable level of realism while keeping computation time to a minimum. A common method of increasing the efficiency is a decrease in the number of nodes controlling the cloth movement, sacrificing details that could only be obtained using a dense discretization of the cloth. A simple and efficient method to simulate cloth is the mass-spring system which utilises a regular grid of vertices, representing discrete points along the cloth's surface. The structure of geometry images is similar, which makes them an ideal choice for representing arbitrary surface meshes in a cloth simulator whilst retaining the efficiency of a mass-spring system. In this paper we present a novel method to apply geometry images to cloth simulation in order to obtain cloth motion for surface meshes of arbitrary genus, while retaining the simplicity of a mass-spring model. We also adapt an implicit/explicit integration scheme, utilising the regular structure of geometry images, to improve performance. Additionally, the cloth is able to drape over other objects, also represented as geometry images. Our method is efficient enough to allow for fairly dense cloth meshes to be simulated in real-time.
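The core of a mass-spring system on the regular geometry-image grid is the Hooke spring force between neighbouring nodes. This is a minimal sketch of that force, not the authors' implementation.

```python
import math

def spring_force(p, q, rest_len, k):
    """Hooke's-law force on node p from the spring joining p and q (3D)."""
    d = [qi - pi for pi, qi in zip(p, q)]
    dist = math.sqrt(sum(c * c for c in d))
    if dist == 0.0:
        return [0.0, 0.0, 0.0]
    # force magnitude is proportional to stretch, directed along the spring
    scale = k * (dist - rest_len) / dist
    return [scale * c for c in d]

# A structural spring stretched to twice its rest length pulls p toward q.
f = spring_force([0.0, 0.0, 0.0], [2.0, 0.0, 0.0], rest_len=1.0, k=10.0)
```

Damping and shear/bend springs are added the same way; the regular grid of a geometry image makes every neighbour lookup a constant-offset index, which is what keeps the method efficient for arbitrary-genus meshes.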
DOI: 10.1145/1294685.1294716 (published 2007-10-29)
Citations: 7
Embedded labels for line features in interactive 3D virtual environments
S. Maass, J. Döllner
This paper presents a novel method for labeling line features in interactive virtual 3D environments. It embeds labels into the surfaces of the annotated objects, whereas occlusion by other scene elements is minimized and overlaps between labels are resolved. Embedded labels provide a high correlation between label and annotated object -- they are specifically useful in environments, where available screen-space for annotations is limited (e.g., small displays). To determine optimal positions for the annotation of line features, the degree of occlusion for each position is estimated during the real-time rendering process. We discuss a number of sampling schemes that are used to approximate the visibility measure, including an adapted variant that is particularly suitable for the integration of text based on Latin alphabets. Overlaps between embedded labels are resolved with a conflict graph, which is calculated in a preprocessing step and stores all possible overlap conflicts. To prove the applicability of our approach, we have implemented a prototype application that visualizes street names as embedded labels within a 3D virtual city model in real-time.
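Estimating the degree of occlusion per candidate position by sampling, then picking the least-occluded candidate, can be sketched as follows; the sample count and visibility test are invented for illustration.

```python
def best_label_position(candidates, occluded, samples=8):
    """Pick the candidate position whose sample points are least occluded.

    candidates: list of candidate position ids along the line feature.
    occluded(pos, i) -> bool: whether sample point i of position pos is
    hidden by another scene element (hypothetical visibility query).
    """
    def occlusion(pos):
        return sum(occluded(pos, i) for i in range(samples)) / samples
    return min(candidates, key=occlusion)

# Toy visibility: position 2 is fully visible, the others are half occluded.
choice = best_label_position([0, 1, 2], lambda pos, i: pos != 2 and i % 2 == 0)
```

In the paper the estimate runs during real-time rendering; the sampling scheme chosen (including the Latin-text-adapted variant) determines where the sample points lie.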
DOI: 10.1145/1294685.1294695 (published 2007-10-29)
Citations: 26
Interpolatory √3 subdivision with harmonic interpolation
A. Hardy
A variation on the interpolatory subdivision scheme [Labsik and Greiner 2000] is presented based on √3 subdivision and harmonic interpolation. Harmonic interpolation is generalized to triangle meshes based on a distance representation of the basis functions. The harmonic surface is approximated by limiting the support of the basis functions and the resulting surface is shown to satisfy necessary conditions for continuity. We provide subdivision rules for vertices of valence 3, 4 and 6 that can be applied directly to obtain a smooth surface. Other valences are handled as described in the literature. The resulting algorithm is easily implemented due to √3 subdivision and the simplicity of the stencils involved.
DOI: 10.1145/1294685.1294701 (published 2007-10-29)
Citations: 0
High dynamic range preserving compression of light fields and reflectance fields
N. Menzel, M. Guthe
Surface structures at meso- and micro-scale are almost impossible to convincingly reproduce with analytical BRDFs. Therefore, image-based methods like light fields, surface light fields, reflectance fields and bidirectional texture functions became widely accepted to represent spatially nonuniform surfaces. For all of these techniques a set of input photographs from varying view and/or light directions is taken that usually by far exceeds the available graphics memory. The recent development of HDR photography additionally increased the amount of data generated by current acquisition systems since every image needs to be stored as an array of floating point numbers. Furthermore, statistical compression methods -- like principal component analysis (PCA) -- that are commonly used for compression are optimal for linearly distributed values and thus cannot handle the high dynamic range radiance values appropriately. In this paper, we address both of these problems introduced by the acquisition of high dynamic range light and reflectance fields. Instead of directly compressing the radiance data with a truncated PCA, a non-linear transformation is applied to input values in advance to assure an almost uniform distribution. This does not only significantly improve the approximation quality after an arbitrary tone mapping operator is applied to the reconstructed HDR images, but also allows to efficiently quantize the principal components and even apply hardware-supported texture compression without much further loss of quality. Thus, in addition to the improved visual quality, the storage requirements are reduced by more than an order of magnitude.
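The idea of a non-linear transform before the truncated PCA can be sketched with a logarithm as a stand-in for the paper's mapping; the abstract does not specify the actual transform or component count, so both are assumptions here.

```python
import numpy as np

rng = np.random.default_rng(0)
# HDR radiance samples span several orders of magnitude.
radiance = rng.uniform(0.01, 1000.0, size=(64, 8))

# Non-linear transform (log used here as a stand-in) makes the value
# distribution far more uniform, which suits a linear method like PCA.
logged = np.log(radiance)
mean = logged.mean(axis=0)
centered = logged - mean

# Truncated PCA via SVD: keep only the leading principal components.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
basis = vt[:4]                              # 4 components kept (assumed)

coeffs = centered @ basis.T                 # compress
restored = np.exp(coeffs @ basis + mean)    # decompress, back to linear HDR
```

Because the coefficients are now roughly uniformly distributed, they can also be quantised aggressively (and stored in compressed textures) with little additional loss, which is where the order-of-magnitude storage reduction comes from.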
DOI: 10.1145/1294685.1294697 (published 2007-10-29)
Citations: 1
A multiresolution object space point-based rendering approach for mobile devices
Zhiying He, Xiaohui Liang
The limited resources of mobile devices make providing real-time, realistic 3D graphics locally a challenging task. Recent research has focused either on remote rendering, which offers poor interactivity, or on simple local rendering, which offers poor image quality. To address this challenge, this paper presents a new multiresolution object-space point-based rendering approach for local rendering on mobile devices. The approach uses hierarchical clustering to build a hierarchy of bounding volumes; in addition, curvature sampling reduces the number of sample points further, and a rapid LOD selection algorithm is given. View-independent object-space surface splatting is then used as the rendering primitive, which provides good rendering quality. Experimental results show that this approach takes less time and achieves better rendering quality on mobile devices.
{"title":"A multiresolution object space point-based rendering approach for mobile devices","authors":"Zhiying He, Xiaohui Liang","doi":"10.1145/1294685.1294687","DOIUrl":"https://doi.org/10.1145/1294685.1294687","url":null,"abstract":"The limitation of resource on mobile devices makes providing real-time, realistic 3D graphics on local become a challenging task. Recent researches focus on remote rendering which is not good in interaction, and simple rendering on local which is not good in rendering quality. As for this challenge, this paper presents a new multiresolution object space point-based rendering approach for mobile devices local rendering. The approach use hierarchical clustering to create a hierarchy of bounding volumes, in addition, we use curvature sampling to reduce more amounts of sample points and give a rapid LOD selection algorithm. Then use view-independent object space surface splatting as the rendering primitives which can provide good rendering quality. Experiment results show that this approach uses less time and gets better rendering quality for mobile devices.","PeriodicalId":325699,"journal":{"name":"International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128755172","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
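The rapid LOD selection described in the abstract above can be sketched as a traversal of the bounding-volume hierarchy that stops refining once a cluster's projected screen size is small enough. This is a hedged illustration under assumed details: bounding spheres, a per-cluster representative splat, and the `ClusterNode` / `select_lod` names are all hypothetical, not taken from the paper.

```python
import math

class ClusterNode:
    """A node of a bounding-sphere hierarchy over point samples."""
    def __init__(self, center, radius, children=None, splat=None):
        self.center, self.radius = center, radius
        self.children = children or []
        self.splat = splat  # representative point sample for this cluster

def select_lod(node, eye, fov_y, viewport_h, max_pixels, out):
    """Collect splats whose projected cluster size is at most max_pixels."""
    dist = math.dist(node.center, eye)
    # Approximate projected diameter of the bounding sphere, in pixels.
    proj = (2 * node.radius / max(dist, 1e-6)) * (
        viewport_h / (2 * math.tan(fov_y / 2)))
    if proj <= max_pixels or not node.children:
        out.append(node.splat)       # coarse enough: render the representative
    else:
        for child in node.children:  # too large on screen: refine
            select_lod(child, eye, fov_y, viewport_h, max_pixels, out)
```

Far-away clusters are thus rendered as a single splat, while nearby geometry is refined down to the leaves, which is the usual trade-off such hierarchies exploit on resource-limited devices.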
Journal: International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa