The computational time of collision detection can grow excessive when too many checks are performed on non-colliding primitives in particular areas of high-resolution deformable objects. The problem is usually addressed with best-fit bounding volume hierarchies (BVHs), which require considerably more memory and time to update the bounding volumes when the objects deform. Hence, a particle-based collision detection method is enhanced to reduce checks against non-colliding primitives by attaching movable particles to the object vertices in each such area. The distances between corresponding particles are computed to select the closest vertices between each pair of objects. Experimental results show that the proposed method requires less collision-checking time than BVHs when used with deformable objects. Moreover, the proposed primitive-checking method can be parallelized on the GPU, increasing performance while preserving accuracy compared to the previous BVH method.
Thiti Rungcharoenpaisal and P. Kanongchaiyos. "Collision detection for high-resolution deformable object using particle-based approach." ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia, 2009. doi:10.1145/1666778.1666801
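As a rough illustration of the particle-distance culling idea in the abstract above — the function name, the brute-force pairwise test, and the fixed radius are ours, not the paper's (a real implementation would use a spatial structure instead of an O(nm) distance matrix):

```python
import numpy as np

def candidate_vertex_pairs(verts_a, verts_b, radius):
    """Broad-phase cull: keep only vertex pairs whose particles
    (points tracking the deforming vertices) lie within `radius`.
    Exact primitive checks would then run on the survivors only."""
    # Pairwise distances between the two particle sets.
    diff = verts_a[:, None, :] - verts_b[None, :, :]
    dist = np.linalg.norm(diff, axis=2)
    ia, ib = np.nonzero(dist < radius)
    return list(zip(ia.tolist(), ib.tolist()))

# Two tiny "objects": only the nearby vertex pair survives culling.
a = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0]])
b = np.array([[0.1, 0.0, 0.0], [9.0, 0.0, 0.0]])
print(candidate_vertex_pairs(a, b, radius=0.5))  # -> [(0, 0)]
```

Only the surviving pairs would proceed to exact primitive-level intersection tests, which is where the claimed savings over BVH traversal on deforming meshes would come from.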
We propose in this paper an interactive sketch-based system for simulating oriental brush strokes on complex shapes. We introduce a contour-driven approach in which the user inputs contours representing complex shapes; the system automatically estimates the optimal brush trajectory and then renders the strokes as an oriental ink painting. Unlike previous work, where the brush trajectory is explicitly specified as input, we estimate this trajectory automatically from the outline of the shape to be painted. Existing methods can be classified into (1) methods that explicitly model a virtual 3D brush and mimic its effect on paper [Wang and Wang 2007], and (2) methods that simulate the rendering effect on a 2D canvas without an explicit 3D brush model [Okabe et al. 2007]. Our approach falls into the second category. Figure 1 shows four results generated by our algorithm.
Xie Ning, Hamid Laga, S. Saito, and M. Nakajima. "Contour-driven brush stroke synthesis." ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia, 2009. doi:10.1145/1667146.1667154
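A toy sketch of deriving a stroke trajectory from a contour, in the spirit of the abstract above — this midpoint pairing is our stand-in, not the paper's optimal-trajectory estimation, and it assumes the two sides of an elongated contour have already been matched point by point:

```python
def trajectory_from_contour(left_side, right_side):
    """Toy trajectory estimate: given the two sides of an elongated
    contour as matched (x, y) point lists, take the midpoint of each
    pair as a point on the brush centerline."""
    return [((x1 + x2) / 2.0, (y1 + y2) / 2.0)
            for (x1, y1), (x2, y2) in zip(left_side, right_side)]

# An elongated blob: the estimated trajectory runs down its middle.
left  = [(0.0, 1.0), (1.0, 1.25), (2.0, 1.0)]
right = [(0.0, -1.0), (1.0, -0.75), (2.0, -1.0)]
print(trajectory_from_contour(left, right))
# -> [(0.0, 0.0), (1.0, 0.25), (2.0, 0.0)]
```

The resulting centerline would then drive a stroke model (width from the local contour separation, ink texture along the path), which is the part the paper's renderer handles.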
For the feature film 2012, Digital Domain was asked to destroy Los Angeles, entirely in computer graphics, with an earthquake on a scale humankind has never experienced. The earthquake levels the city and opens huge canyons and fissures before our eyes, revealing underground structures and formations. As part of the task of destroying Los Angeles, we implemented a 2D-texture-based 3D volumetric shader for RenderMan to create photorealistic fissure walls.
H. Duiker and Tadao Mihashi. "Volumetric texture for fissure in 2012." ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia, 2009. doi:10.1145/1666778.1666795
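A minimal sketch of the general idea of a 2D-texture-driven 3D lookup — indexing a flat texture by position and depth so that any cut through the ground reveals plausible strata. The indexing scheme and names here are purely illustrative assumptions; the production shader was a RenderMan plugin whose details the abstract does not give:

```python
def sample_fissure_wall(tex, p):
    """Illustrative 2D-texture-driven 'volume' lookup: a wall texture
    indexed by (x, depth) so every exposed cross-section shows strata.
    `tex` is a row-major 2D grid of scalar albedo values; `p` is an
    (x, y, z) point with z increasing downward, both in [0, 1]."""
    h, w = len(tex), len(tex[0])
    u = int(p[0] * (w - 1)) % w   # along the fissure
    v = int(p[2] * (h - 1)) % h   # depth selects a stratum row
    return tex[v][u]

# 2x2 "texture": light rock near the surface, dark rock below.
tex = [[0.8, 0.7],
       [0.2, 0.1]]
print(sample_fissure_wall(tex, (0.0, 0.5, 0.0)))  # near-surface -> 0.8
print(sample_fissure_wall(tex, (0.0, 0.5, 1.0)))  # deep -> 0.2
```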
In Interaction Bar, each wine cup represents a different character and emotion. Simulated scenes and the interaction surface react to users with unique visuals in each situation. Just like a real barroom crowd, these interactions can build bridges of friendship and encourage conversations, even among people who have never met.
Chia-Hao Yang and Bo-Fan Jheng. "Interaction bar." ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia, 2009. doi:10.1145/1665137.1665193
One of the major technical challenges in the animated film Astroboy was creating believable crowds of town citizens in the battle-arena scene. To speed up the conversion of citizen behavior into RIB format, we applied the pre-baked RIB method proposed in the Tutorial on Procedural Primitives [Hery and Sutton 2001]. Using motion-editing techniques, our crowd characters are able to interact with the environment efficiently. Through procedural RIB generation, lighting artists were able to apply secondary masks to crowd characters, and new motion generated on the fly during simulation can be rendered efficiently.
E. Tse and Justin Lo. "Crowd simulation in Astroboy." ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia, 2009. doi:10.1145/1667146.1667202
The Mill Los Angeles team of 3D artists relished the opportunity to work on the latest commercial for Swedish pension company AMF. Filip Engstrom directed the spot, which features a host of fully CG, photo-real insects. The star of the ad is a caterpillar who becomes forlorn until he transforms into a butterfly.
Filip Engstrom. "AMF caterpillar." ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia, 2009. doi:10.1145/1665208.1665210
This film describes an attempt to use a high-definition digital cinema system to produce content for a next-generation image system. Paintings by Ito Jakuchu were used as prototypes to produce an immersive virtual environment that allows people to enter the paintings. The result reveals possibilities for new collaborative studies among various fields such as art, psychology, and cognitive science, and a larger-than-life display to analyze and understand cultural properties and art works.
Sangtae Kim. "Analysis and understanding of paintings by Ito Jakuchu." ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia, 2009. doi:10.1145/1665137.1665155
Over the past decade, modern GPUs have provided gradually increasing programmability in the vertex, geometry, and fragment shaders. However, many classical problems, such as order-independent transparency (OIT) and occlusion culling, have not yet been solved efficiently with the traditional graphics pipeline. The main reason is that the behavior of the current pipeline stage is hard to determine because future data are unpredictable. Since the rasterization and blending stages are still largely fixed functions on chip, previous improvements on these problems have always required hardware modifications and thus remain theoretical. In this paper we propose CUDA Renderer, a fully programmable graphics pipeline built on the compute unified device architecture (CUDA) [NVIDIA 2008] that runs entirely on current graphics hardware. Our experimental results demonstrate a significant speedup over the traditional graphics pipeline, especially on OIT. We believe many other problems can also benefit from this flexible architecture.
Fang Liu, Meng-Cheng Huang, Xuehui Liu, and E. Wu. "CUDA renderer: a programmable graphics pipeline." ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia, 2009. doi:10.1145/1667146.1667189
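To make concrete why OIT needs a programmable pipeline, here is the per-pixel resolve step that fixed-function blending cannot perform: gather all transparent fragments in arbitrary submission order, sort by depth, then blend back to front. This is a plain-Python sketch of the standard technique, not the paper's CUDA implementation:

```python
def composite_oit(fragments, background=(0.0, 0.0, 0.0)):
    """Resolve one pixel's transparency order-independently: sort the
    (depth, rgb, alpha) fragments far-to-near, then apply the usual
    'over' blend. Submission order no longer matters."""
    color = background
    for depth, rgb, a in sorted(fragments, key=lambda f: f[0], reverse=True):
        color = tuple(a * c + (1.0 - a) * bg for c, bg in zip(rgb, color))
    return color

# Two half-transparent layers submitted near-first (the "wrong" order
# for fixed-function blending); the resolve still gets it right.
frags = [(0.3, (1.0, 0.0, 0.0), 0.5),   # near, red
         (0.9, (0.0, 0.0, 1.0), 0.5)]   # far, blue
print(composite_oit(frags))  # -> (0.5, 0.0, 0.25)
```

On a GPU, a software pipeline would keep such a fragment list per pixel (e.g. in global memory) and run this resolve in a compute kernel, which is exactly the flexibility fixed-function blending hardware lacks.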
Skeleton-subspace deformation (SSD), the most popular method for articulated character animation, often causes artifacts. Animators have to edit the mesh each time, which is extremely tedious and time-consuming, so example-based skinning has been proposed. It employs edited meshes as target poses and generates plausible animation efficiently. In this technique, the character mesh should be deformed to fit the target poses accurately. Mohr et al. [2003] introduced additional joints, but they expect animators to embed the skeleton precisely.
Kentaro Yamanaka, Akane Yano, and S. Morishima. "Example based skinning with progressively optimized support joints." ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia, 2009. doi:10.1145/1666778.1666833
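For reference, the SSD (linear blend skinning) formulation whose artifacts example-based skinning corrects: each deformed vertex is a weighted sum of the bone transforms applied to its rest position, v' = Σᵢ wᵢ Mᵢ v. A minimal sketch (our own illustration, not the paper's method):

```python
import numpy as np

def ssd_skin(rest_verts, weights, bone_mats):
    """Skeleton-subspace deformation / linear blend skinning:
    deformed vertex = sum over bones of weight * (bone matrix @ rest
    position). Blending the matrices' effects linearly is what causes
    the well-known collapse artifacts near bent joints."""
    out = []
    for v, w in zip(rest_verts, weights):
        vh = np.append(v, 1.0)  # homogeneous position
        out.append(sum(wi * (M @ vh)[:3] for wi, M in zip(w, bone_mats)))
    return np.array(out)

# One bone fixed, one rotated 90 degrees about Z; a vertex weighted
# half-and-half lands midway between the two transformed positions --
# the classic source of volume loss that example poses fix.
identity = np.eye(4)
rot_z90 = np.array([[0.0, -1.0, 0.0, 0.0],
                    [1.0,  0.0, 0.0, 0.0],
                    [0.0,  0.0, 1.0, 0.0],
                    [0.0,  0.0, 0.0, 1.0]])
v = ssd_skin([np.array([1.0, 0.0, 0.0])], [[0.5, 0.5]], [identity, rot_z90])
print(v)  # -> [[0.5 0.5 0. ]]
```

Note the blended result has length ~0.707 instead of 1.0, which is precisely the shrinkage that motivates correction via example poses or extra support joints.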
In 2008, Mantiuk et al. proposed a display-adaptive tone-mapping operator [Mantiuk et al. 2008]. It adapted Daly's visible differences predictor [Daly 1993] and Wilson's transducer function [Wilson 1980; Wilson and Gelb 1984] to measure the contrast response of the human visual system. The transducer function is expressed as ΦQ(C) (Equation 1), the response to a contrast stimulus, where C is the physical contrast of the stimulus and S is the sensitivity to this type of contrast stimulus at a given frequency, orientation, background luminance, and so on. Q is an empirical parameter that falls in the wide range [2.0, 6.0] across different experiments, which suggests that the value of Q depends on the experimental conditions. A simple adaptation of Equation 1 with a fixed Q may therefore introduce unexpected errors.
Zhongkang Lu and S. Rahardja. "A contrast perception matching based HDR tone-mapping operator." ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia, 2009. doi:10.1145/1666778.1666823