Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games: Latest Publications

Fast light-map computation with virtual polygon lights
Pub Date : 2013-03-21 DOI: 10.1145/2448196.2448210
Christian Luksch, R. Tobler, R. Habel, M. Schwärzler, M. Wimmer
We propose a new method for the fast computation of light maps using a many-light global-illumination solution. A complete scene can be light mapped on the order of seconds to minutes, allowing fast and consistent previews for editing or even generation at loading time. In our method, virtual point lights are clustered into a set of virtual polygon lights, which represent a compact description of the illumination in the scene. The actual light-map generation is performed directly on the GPU. Our approach degrades gracefully, avoiding objectionable artifacts even for very short computation times.
Pages: 87-94
Citations: 27
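The abstract does not spell out how the virtual point lights are grouped into virtual polygon lights. As a toy illustration of such a clustering step only (the function `cluster_vpls` and the plain k-means grouping are my assumptions, not the paper's algorithm), one could write:

```python
import random

def cluster_vpls(vpls, k, iters=10):
    """Toy k-means clustering of virtual point lights (VPLs) by position.

    vpls: list of (x, y, z, intensity) tuples.
    Returns k representatives, each (centroid, total_intensity), standing in
    for one "virtual polygon light" as a compact description of the lighting.
    """
    rng = random.Random(0)  # fixed seed for reproducibility
    centers = [v[:3] for v in rng.sample(vpls, k)]
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        # assign each VPL to its nearest cluster center
        groups = [[] for _ in range(k)]
        for v in vpls:
            dists = [sum((a - b) ** 2 for a, b in zip(v[:3], c)) for c in centers]
            groups[dists.index(min(dists))].append(v)
        # move each center to the mean of its assigned VPLs
        for i, g in enumerate(groups):
            if g:
                centers[i] = tuple(sum(v[j] for v in g) / len(g) for j in range(3))
    return [(centers[i], sum(v[3] for v in g)) for i, g in enumerate(groups)]
```

A real implementation would also fit a polygon and a representative orientation per cluster; this sketch only aggregates positions and intensities.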
Physical simulation of an embedded surface mesh involving deformation and fracture
Pub Date : 2013-03-21 DOI: 10.1145/2448196.2448237
B. Clack, J. Keyser
Physically simulating non-rigid virtual objects which can deform or break apart within their environments is now common in state-of-the-art virtual simulations such as video games or surgery simulations. Real-time performance requires a physical model which provides an approximation to the true solution for fast computation while still conveying enough believability of the simulation to the user. By embedding a complex surface mesh within simpler physical geometry, the mesh complexity can be separated from the algorithmic complexity of the physical simulation. Embedding methods have been successful in production-quality products (e.g. [Parker and O'Brien 2009]). In the presence of fracture, it is still unclear how to derive the graphical representation of a solid object defined only as a surface mesh with no volume information.
Pages: 189
Citations: 1
Texture brush: an interactive surface texturing interface
Pub Date : 2013-03-21 DOI: 10.1145/2448196.2448221
Qian Sun, Long Zhang, Minqi Zhang, Xiang Ying, Shiqing Xin, Jiazhi Xia, Ying He
This paper presents Texture Brush, an interactive interface for texturing 3D surfaces. We extend the conventional exponential map to a more general setting, in which the generator can be an arbitrary curve. Based on our extended exponential map, we develop a local parameterization method which naturally supports anisotropic texture mapping. With Texture Brush, the user can easily specify such a local parameterization with a free-form stroke on the surface. We also propose a set of intuitive operations, mainly based on a 3D painting metaphor, including texture painting, texture cloning, texture animation design, and texture editing. Compared to existing surface texturing techniques, our method enables a smoother and more natural workflow, so that the user can focus on the design task itself without switching back and forth among different tools or stages. The encouraging experimental results and positive evaluation by artists demonstrate the efficacy of Texture Brush for interactive texture mapping.
Pages: 153-160
Citations: 20
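A minimal 2D sketch of what a curve-generated local parameterization can look like, assuming arc length along the stroke as one coordinate and signed perpendicular distance as the other (the helper `stroke_uv` is hypothetical; the paper's extended exponential map operates on 3D surfaces and handles distortion properly):

```python
def stroke_uv(p, stroke):
    """Parameterize a 2D point against a polyline stroke.

    u = arc length to the closest point on the stroke,
    v = signed perpendicular distance (positive on the stroke's left).
    """
    best = None  # (squared distance, u, v)
    run = 0.0    # arc length accumulated up to the current segment
    for (ax, ay), (bx, by) in zip(stroke, stroke[1:]):
        dx, dy = bx - ax, by - ay
        seg_len = (dx * dx + dy * dy) ** 0.5
        if seg_len == 0.0:
            continue
        # project p onto the segment, clamped to its endpoints
        t = max(0.0, min(1.0, ((p[0] - ax) * dx + (p[1] - ay) * dy) / seg_len ** 2))
        qx, qy = ax + t * dx, ay + t * dy
        wx, wy = p[0] - qx, p[1] - qy
        d2 = wx * wx + wy * wy
        if best is None or d2 < best[0]:
            side = 1.0 if dx * wy - dy * wx >= 0.0 else -1.0  # cross-product sign
            best = (d2, run + t * seg_len, side * d2 ** 0.5)
        run += seg_len
    return best[1], best[2]
```

Texture lookup would then simply read the example image at (u, v), which is what makes the stroke act like a brush.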
Fast percentage closer soft shadows using temporal coherence
Pub Date : 2013-03-21 DOI: 10.1145/2448196.2448209
M. Schwärzler, Christian Luksch, D. Scherzer, M. Wimmer
We propose a novel way to efficiently calculate soft shadows in real-time applications by overcoming the high computational effort involved with the complex corresponding visibility estimation each frame: we exploit the temporal coherence prevalent in typical scene movement, making the estimation of a new shadow value necessary only when regions are newly disoccluded due to camera adjustment, or the shadow situation changes due to object movement. By extending the typical shadow-mapping algorithm with an additional lightweight buffer for the tracking of dynamic scene objects, we can robustly and efficiently detect all screen-space fragments that need to be updated, including not only the moving objects themselves, but also the soft shadows they cast. By applying this strategy to the popular Percentage Closer Soft Shadow algorithm (PCSS), we double rendering performance in scenes with both static and dynamic objects -- as prevalent in various 3D game levels -- while maintaining the visual quality of the original approach.
Pages: 79-86
Citations: 13
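The update criterion described in the abstract can be sketched as a per-pixel mask, here over flattened 1D buffers (the function name, buffer layout, and epsilon are illustrative assumptions, not the paper's code):

```python
def pixels_to_update(prev_depth, cur_depth, dynamic_mask, eps=1e-3):
    """Flag screen-space fragments whose soft shadow must be recomputed.

    A pixel needs an update if its stored depth no longer matches the
    current frame (disocclusion after camera movement) or if it is touched
    by a dynamic object or its shadow region (dynamic_mask). All other
    pixels reuse the shadow value cached from the previous frame.
    """
    return [abs(p - c) > eps or m
            for p, c, m in zip(prev_depth, cur_depth, dynamic_mask)]
```

In a GPU implementation this test would run in a fragment shader against the reprojected previous-frame buffers rather than over Python lists.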
Simple and efficient example-based texture synthesis using tiling and deformation
Pub Date : 2013-03-21 DOI: 10.1145/2448196.2448219
Kan Chen, H. Johan, W. Müller-Wittig
In computer graphics, textures represent the detailed appearance of object surfaces, such as colors and patterns. Example-based texture synthesis constructs a larger visual pattern from a small example texture image. In this paper, we present a simple and efficient method that can synthesize a large-scale texture in real time from a given example texture by simply tiling and deforming it. Unlike most existing techniques, our method performs no search operation and can compute texture values at any given point (random access). In addition, it requires only enough storage for a single example texture. Our method is suitable for synthesizing irregular and near-stochastic textures. We also propose methods to efficiently synthesize and map 3D solid textures on 3D meshes.
Pages: 145-152
Citations: 10
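A toy sketch of search-free, random-access synthesis in the spirit described above, assuming a hash-picked tile offset and a small sinusoidal warp (both my assumptions; the paper's actual deformation scheme may differ):

```python
import math

def synth_texel(x, y, example, tile=16, seed=1234):
    """Evaluate one output texel with no neighborhood search and no extra
    storage: the tile containing (x, y) hashes to a pseudo-random offset
    into the example image, and a sinusoidal warp perturbs coordinates
    inside the tile to hide the tile grid. Any texel can be evaluated
    independently, giving random access.
    """
    h, w = len(example), len(example[0])
    tx, ty = x // tile, y // tile
    # integer hash of the tile index (constants are common spatial-hash primes)
    r = ((tx * 73856093) ^ (ty * 19349663) ^ seed) & 0xFFFFFFFF
    ox, oy = r % w, (r >> 8) % h
    # small deformation inside the tile
    fx = x % tile + int(2.0 * math.sin(0.5 * (y % tile)))
    fy = y % tile
    return example[(oy + fy) % h][(ox + fx) % w]
```

Because the result depends only on (x, y), the example, and the seed, the same texel always evaluates to the same value, which is the property the abstract calls random access.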
Splatting lines for 3D mesh illustration
Pub Date : 2013-03-21 DOI: 10.1145/2448196.2448241
Qian Sun, Long Zhang, Ying He
Line drawings are a popular shape-depiction technique due to their capability to express meaningful information while ignoring less important or distracting details. Many computer-generated line-drawing algorithms have been proposed in the past decade, such as suggestive contours, ridge-valley lines, apparent ridges, photic extremum lines, demarcating curves, and Laplacian lines, to name just a few.
Pages: 193
Citations: 2
Warping virtual space for low-cost haptic feedback
Pub Date : 2013-03-21 DOI: 10.1145/2448196.2448243
Luv Kohli
With the introduction of the Nintendo Wii, Playstation Move, and Microsoft Kinect, gaming and virtual reality technologies have begun to merge. These technologies have enabled low-cost, more-natural interaction with games and virtual environments (VEs). However, the sense of touch is usually missing. Interacting with virtual objects often means holding a hand in the air, which can be tiring if done for long. I present a technique to turn simple objects into haptic surfaces for virtual objects, extending earlier work on Redirected Touching [Kohli et al. 2012].
Pages: 195
Citations: 11
Volume-based indirect lighting with irradiance decomposition
Pub Date : 2013-03-21 DOI: 10.1145/2448196.2448242
Ruirui Li, K. Qin
High-quality indirect lighting at interactive speed is a difficult challenge. To approximate indirect illumination quickly, volume-based rendering techniques have been used. The Light Propagation Volume (LPV) method [Kaplanyan et al. 2010] partitions scenes into coarse lattices and propagates spherical-harmonics-encoded radiance across them. LPV can render complex and dynamic scenes in real time, but because radiance transfer is heavily approximated between lattice cells, it fails to simulate indirect lighting between surfaces in the same cell. Voxel-based Global Illumination (VGI) [Kaplanyan et al. 2011], on the other hand, voxelizes the scene into fine voxels and performs voxel-based ray marching to find the reflected surface in the near field. By adopting 1/4 x 1/4 sampling, VGI can render at 18-28 frames per second. For complex scenes and full global illumination, however, it requires huge volume data and a large number of rays, which degrades rendering performance to 2.2 s per frame.
Pages: 194
Citations: 0
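VGI's near-field search can be illustrated with a fixed-step march through a voxel grid (a deliberate simplification: real implementations use GPU volumes and proper DDA traversal; the function `ray_march` and its parameters are illustrative assumptions):

```python
def ray_march(occupied, origin, direction, max_steps=64, step=0.5):
    """March along a ray in fixed increments through a voxel grid and
    return the first occupied cell hit, or None if nothing is found.

    occupied: set of integer (i, j, k) cells; coordinates are assumed
    nonnegative so that int() truncation maps points to their cell.
    """
    x, y, z = origin
    dx, dy, dz = direction
    for _ in range(max_steps):
        x += dx * step
        y += dy * step
        z += dz * step
        cell = (int(x), int(y), int(z))
        if cell in occupied:
            return cell
    return None
```

The hit cell would then supply the reflected surface's stored radiance; the large ray counts the abstract mentions come from repeating this walk per pixel and per sample direction.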
WYSIWYG stereo painting
Pub Date : 2013-03-21 DOI: 10.1145/2448196.2448223
Yongjin Kim, H. Winnemöller, Seungyong Lee
Despite the increasing popularity of stereo capture and display systems, creating stereo artwork remains a challenge. This paper presents a stereo painting system, which enables effective from-scratch creation of high-quality stereo artwork. A key concept of our system is a stereo layer, which is composed of two RGBAd (RGBA + depth) buffers. Stereo layers alleviate the need for the fully formed representational 3D geometry required by most existing 3D painting systems, and allow for simple, essential depth specification. RGBAd buffers also provide scalability for complex scenes by minimizing the dependency of stereo painting updates on scene complexity. For interaction with stereo layers, we present stereo paint and stereo depth brushes, which manipulate the photometric (RGBA) and depth buffers of a stereo layer, respectively. In our system, painting and depth-manipulation operations can be performed in arbitrary order with real-time visual feedback, providing a flexible WYSIWYG workflow for stereo painting. Comments from artists and experimental results demonstrate that our system effectively aids in the creation of compelling stereo paintings.
Pages: 169-176
Citations: 9
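Combining several RGBAd buffers into one eye's image reduces, per pixel, to a depth-sorted front-to-back alpha blend. A minimal stand-in (the function `composite_pixel` and the fragment layout are my assumptions; the paper's system composites on the GPU):

```python
def composite_pixel(fragments):
    """Front-to-back alpha compositing of one pixel across RGBAd layers.

    Each fragment is (r, g, b, a, depth); nearer fragments (smaller depth)
    are blended first, so the depth channel of each layer decides
    occlusion between layers.
    """
    out_r = out_g = out_b = 0.0
    out_a = 0.0
    for r, g, b, a, _ in sorted(fragments, key=lambda f: f[4]):
        w = a * (1.0 - out_a)  # coverage times remaining transmittance
        out_r += w * r
        out_g += w * g
        out_b += w * b
        out_a += w
    return (out_r, out_g, out_b, out_a)
```

Running the same composite against the left-eye and right-eye buffers of every stereo layer yields the stereo pair.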
From 3D to reality: projector-based sculpture assistance system
Pub Date : 2013-03-21 DOI: 10.1145/2448196.2448234
Fu-Che Wu
Art is a creative process. Can it possibly be supported by a computational tool? Flagg et al. [2006] proposed that capture-and-access technology can provide a key form of computational support for the creative process. Following this idea, a projector-based sculpture guiding system was constructed. The system can scan 3D structures, compare differences, and display the information on the physical surface. The system consists of a projector and a camera: the camera is a Point Grey Chameleon USB camera with an image resolution of 1280x960, and the projector is an NEC M300x with a resolution of 1024x768. To keep a fixed relationship between the projector and the camera, the camera is mounted on the projector.
Pages: 186
Citations: 0
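Displaying scanned differences on the physical surface ultimately rests on projecting 3D points into projector pixels. A minimal pinhole-model sketch (the intrinsics values below are placeholders; a real projector-camera pair additionally needs calibrated extrinsics and lens-distortion correction):

```python
def project(point, f, cx, cy):
    """Map a 3D point in projector space (z > 0) to pixel coordinates
    with an ideal pinhole model: focal length f in pixels, principal
    point (cx, cy).
    """
    x, y, z = point
    return (f * x / z + cx, f * y / z + cy)
```

For example, with a 1024x768 projector one might assume cx = 512 and cy = 384; a point on the optical axis then lands on the principal point.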