Collision detection can become prohibitively slow when too many non-colliding primitives are checked in particular areas of high-resolution deformable objects. The problem is usually addressed with best-fit bounding volume hierarchies (BVHs), which require considerably more memory and time to update the bounding volumes when the objects deform. We therefore enhance a particle-based collision detection method to reduce checks of non-colliding primitives by placing movable particles on the object vertices of each such area. The distances between corresponding particles are computed to select the closest vertices between each pair of objects. Experimental results show that the proposed method spends less time on collision checking than BVHs when applied to deformable objects. Moreover, the proposed primitive-checking method can be processed in parallel on the GPU, increasing performance while preserving accuracy compared with the previous BVH method.
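The pruning step described above can be sketched as a brute-force nearest-neighbor pass over the two particle sets: for each particle of one object, find the closest particle of the other and keep only pairs within a distance threshold as candidates for exact primitive-level tests. This is a minimal illustrative sketch, not the paper's algorithm; the function name and threshold are assumptions.

```python
import numpy as np

def closest_vertex_pairs(verts_a, verts_b, threshold):
    """For each particle (vertex) of object A, find the nearest particle of
    object B; keep only pairs closer than `threshold` as candidates for
    exact primitive-level collision tests."""
    # Pairwise distances between the two particle sets (n_a x n_b).
    diff = verts_a[:, None, :] - verts_b[None, :, :]
    dist = np.linalg.norm(diff, axis=2)
    nearest = dist.argmin(axis=1)                      # closest B-vertex per A-vertex
    nearest_dist = dist[np.arange(len(verts_a)), nearest]
    mask = nearest_dist < threshold                    # prune clearly non-colliding regions
    return np.flatnonzero(mask), nearest[mask]

# Two small particle clouds; only the near-overlapping vertices survive pruning.
a = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0]])
b = np.array([[0.1, 0.0, 0.0], [9.0, 0.0, 0.0]])
ia, ib = closest_vertex_pairs(a, b, threshold=1.0)
```

The O(n²) distance matrix is what makes this step attractive for GPU parallelization: each row can be evaluated independently.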
{"title":"Collision detection for high-resolution deformable object using particle-based approach","authors":"Thiti Rungcharoenpaisal, P. Kanongchaiyos","doi":"10.1145/1666778.1666801","DOIUrl":"https://doi.org/10.1145/1666778.1666801","url":null,"abstract":"Computational time of collision detection can be exceeded when there are too many checking for non-colliding primitives on some particular areas of high-resolution deformable objects. The problem is usually solved with best-fit bounding volume hierarchies (BVHs) which require much more memory and time for updating the bounding volumes when the objects deform. Hence, a particle-based collision detection method is enhanced to reduce the checking for non-colliding primitives by adding movable particles on the object vertices corresponding to each particular area. The distance of corresponding particles are computed for selecting the closest vertices between each pair of objects. The experimental results show that the proposed method has less colliding checking time than using BVHs when using with the deformable objects. Moreover, the proposed primitive-checking method can be parallel processed on GPU increasing speed performance while accuracy is still preserved when the results are compared to the previous BVH method.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133478853","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Japanese spacecraft Kaguya (Selene) was launched on 14 September 2007 by the Japan Aerospace Exploration Agency. Its objectives are "to obtain scientific data of the lunar origin and evolution and to develop the technology for future lunar exploration."
{"title":"Entire topography of lunar surface","authors":"H. Nakayama","doi":"10.1145/1665208.1665243","DOIUrl":"https://doi.org/10.1145/1665208.1665243","url":null,"abstract":"The Japanese spacedraft Kaguya (Selene) was launched on 14 September 2007 by the Japan Aerospace Exploration Agency. Its objectives are \"to obtain scientific data of the lunar origin and evolution and to develop the technology for future lunar exploration.\"","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114338338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Over the past decade, modern GPUs have offered steadily increasing programmability in the vertex, geometry, and fragment shaders. However, many classical problems, such as order-independent transparency (OIT) and occlusion culling, have not yet been solved efficiently with the traditional graphics pipeline. The main reason is that the behavior of the current pipeline stage is hard to determine because future data are unpredictable. Since the rasterization and blending stages are still largely fixed functions on chip, previous improvements to these problems have required hardware modifications and thus remained theoretical. In this paper we propose CUDA Renderer, a fully programmable graphics pipeline built on the compute unified device architecture (CUDA) [NVIDIA 2008] that runs entirely on current graphics hardware. Our experiments demonstrate significant speedups over the traditional graphics pipeline, especially on OIT. We believe many other problems can also benefit from this flexible architecture.
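The core of OIT is resolving, per pixel, fragments that arrive in arbitrary order: sort them by depth and blend back to front with the standard "over" operator. A minimal single-pixel, single-channel sketch (this is the generic OIT resolve step, not the CUDA Renderer implementation):

```python
def composite_pixel(fragments):
    """Resolve order-independent transparency for one pixel: fragments
    arrive in arbitrary order as (depth, color, alpha); sort by depth and
    blend back to front with the 'over' operator."""
    out = 0.0  # background color (one grayscale channel for brevity)
    for depth, color, alpha in sorted(fragments, key=lambda f: f[0], reverse=True):
        out = alpha * color + (1.0 - alpha) * out
    return out

# Two half-transparent fragments submitted in the "wrong" order; sorting
# makes the result independent of submission order.
frags = [(1.0, 1.0, 0.5), (2.0, 0.0, 0.5)]
assert composite_pixel(frags) == composite_pixel(list(reversed(frags)))
```

A fixed-function blending stage cannot perform this sort, which is why a software pipeline in CUDA can handle OIT where the traditional pipeline cannot.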
{"title":"CUDA renderer: a programmable graphics pipeline","authors":"Fang Liu, Meng-Cheng Huang, Xuehui Liu, E. Wu","doi":"10.1145/1667146.1667189","DOIUrl":"https://doi.org/10.1145/1667146.1667189","url":null,"abstract":"Modern GPUs provide gradually increasing programmability on vertex shader, geometry shader and fragment shader in the past decade. However, many classical problems such as order-independent transparency (OIT), occlusion culling have not yet been efficiently solved using the traditional graphics pipeline. The main reason is that the behavior of the current stage of the pipeline is hard to be determined due to the unpredictable future data. Since the rasterization and blending stage are still largely fixed functions on chip, previous improvements on these problems always require hardware modifications thus remain on the theoretical level. In this paper we propose CUDA Renderer, a fully programmable graphics pipeline using compute unified device architecture (CUDA) [NVIDIA 2008] which can completely run on current graphics hardware. Our experimental results have demonstrated significant speedup to traditional graphics pipeline especially on OIT. We believe many other problems can also benefit from this flexible architecture.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117144774","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper we propose an interactive sketch-based system for simulating oriental brush strokes on complex shapes. We introduce a contour-driven approach: the user inputs contours representing complex shapes, the system automatically estimates the optimal trajectory of the brush, and the strokes are then rendered as an oriental ink painting. Unlike previous work, in which the brush trajectory is explicitly specified as input, we estimate this trajectory automatically from the outline of the shape to paint. Existing methods can be classified into (1) methods that explicitly model a virtual 3D brush and mimic its effect on paper [Wang and Wang 2007], and (2) methods that simulate the rendering effect on a 2D canvas without an explicit 3D brush model [Okabe et al. 2007]. Our approach falls into the second category. Figure 1 shows four results generated by our algorithm.
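To give a feel for trajectory-from-contour estimation, here is a deliberately naive baseline: pair the i-th contour point with the (n-1-i)-th and take midpoints, which recovers the midline of a simple elongated outline. This is a crude stand-in for the paper's optimal-trajectory estimation, and it assumes the contour is ordered around the outline starting at one stroke end.

```python
def naive_trajectory(contour):
    """Estimate a brush trajectory from a closed contour by pairing the i-th
    point with the (n-1-i)-th and taking midpoints. Assumes the contour is
    ordered around the outline starting at one end of the stroke; a crude
    stand-in for the paper's optimal-trajectory estimation."""
    n = len(contour)
    return [((contour[i][0] + contour[n - 1 - i][0]) / 2,
             (contour[i][1] + contour[n - 1 - i][1]) / 2)
            for i in range(n // 2)]

# A 4x2 rectangle outlined from the left end: the trajectory is its midline.
rect = [(0, 0), (2, 0), (4, 0), (4, 2), (2, 2), (0, 2)]
traj = naive_trajectory(rect)
```

Real shapes require a more robust correspondence (e.g. a medial-axis-like construction), which is what the automatic estimation replaces.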
{"title":"Contour-driven brush stroke synthesis","authors":"Xie Ning, Hamid Laga, S. Saito, M. Nakajima","doi":"10.1145/1667146.1667154","DOIUrl":"https://doi.org/10.1145/1667146.1667154","url":null,"abstract":"We propose in this paper an interactive sketch-based system for simulating oriental brush strokes on complex shapes. We introduce a contour-driven approach where the user inputs contours to represent complex shapes, the system estimates automatically the optimal trajectory of the brush, and then renders them into oriental ink painting. Unlike previous work where the brush trajectory is explicitly specified as input, we automatically estimate this trajectory given the outline of the shape to paint. Existing methods can be classified into: (1) methods that explicitly model a virtual 3D brush and mimic its effect on a paper [Wang and Wang 2007], and (2) methods that simulate the rendering effect on a 2D canvas without an explicit 3D brush model [Okabe et al. 2007]. Our approach falls into the second category. Figure 1 shows four results generated by our algorithm.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115759430","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
For the feature film 2012, Digital Domain was asked to destroy Los Angeles, entirely in computer graphics, with an earthquake on a scale humankind has never experienced. The earthquake levels the city and causes huge canyons and fissures to form before our eyes, revealing underground structures and formations. As part of this task, we implemented a 2D texture-based 3D volumetric shader for RenderMan to create photorealistic fissure walls.
{"title":"Volumetric texture for fissure in 2012","authors":"H. Duiker, Tadao Mihashi","doi":"10.1145/1666778.1666795","DOIUrl":"https://doi.org/10.1145/1666778.1666795","url":null,"abstract":"For a feature film 2012, Digital Domain was asked to destroy Los Angeles by an earthquake, the scale of which human kind has never experienced, entirely in computer graphics. The earthquake levels the city and causes huge canyons and fissures to form in front of our eyes, revealing underground structures and formations. As part of the task of destroying Los Angeles, we implemented a 2d texture-based 3d volumetric shader for RenderMan to create photorealistic fissure walls.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115801961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Skeleton-Subspace Deformation (SSD), the most popular method for articulated character animation, often produces artifacts. Animators have to edit the mesh each time, which is tedious and time-consuming, so example-based skinning has been proposed: it uses edited meshes as target poses and generates plausible animation efficiently. In this technique, the character mesh should be deformed to fit the target poses accurately. Mohr et al. [2003] introduced additional joints, but their method expects animators to embed the skeleton precisely.
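SSD itself is the standard linear blend skinning formula: each deformed vertex is the weighted sum of the rest-pose vertex transformed by every bone. A minimal sketch (the function and array shapes are illustrative, not the authors' code):

```python
import numpy as np

def linear_blend_skinning(rest_verts, weights, bone_transforms):
    """Skeleton-subspace deformation (linear blend skinning).
    rest_verts: (n, 3); weights: (n, n_bones), rows summing to 1;
    bone_transforms: list of 4x4 homogeneous matrices."""
    n = len(rest_verts)
    homo = np.hstack([rest_verts, np.ones((n, 1))])    # homogeneous coordinates
    out = np.zeros((n, 3))
    for b, T in enumerate(bone_transforms):
        out += weights[:, b:b + 1] * (homo @ T.T)[:, :3]  # accumulate per bone
    return out

# One vertex fully bound to a single bone translated by (1, 0, 0).
T0 = np.eye(4)
T0[0, 3] = 1.0
v = linear_blend_skinning(np.array([[0.0, 0.0, 0.0]]),
                          np.array([[1.0]]), [T0])
```

The well-known "candy-wrapper" collapse of this blend near twisting joints is exactly the artifact that extra support joints and example poses are meant to correct.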
{"title":"Example based skinning with progressively optimized support joints","authors":"Kentaro Yamanaka, Akane Yano, S. Morishima","doi":"10.1145/1666778.1666833","DOIUrl":"https://doi.org/10.1145/1666778.1666833","url":null,"abstract":"Skeleton-Subspace Deformation (SSD), which is the most popular method for articulated character animation, often causes some artifacts. Animators have to edit mesh each time, which is seriously tedious and time-consuming. So example based skinning has been proposed. It employs edited mesh as target poses and generates plausible animation efficiently. In this technique, character mesh should be deformed to accurately fit target poses. Mohr et al. [2003] introduced additional joints. They expect animators to embed skeleton precisely.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115425563","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In Interaction Bar, each wine cup represents a different character and emotion. Simulated scenes and the interaction surface react to users with unique visuals in each situation. Just like a real barroom crowd, these interactions can build bridges of friendship and encourage conversations, even among people who have never met.
{"title":"Interaction bar","authors":"Chia-Hao Yang, Bo-Fan Jheng","doi":"10.1145/1665137.1665193","DOIUrl":"https://doi.org/10.1145/1665137.1665193","url":null,"abstract":"In Interaction Bar, each wine cup represents a different character and emotion. Simulated scenes and the interaction surface react to users with unique visuals in each situation. Just like a real barroom crowd, these interactions can build bridges of friendship and encourage conversations, even among people who have never met.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125139055","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
All Bean Maxwell wants is for the picture on his foyer wall to hang level. With a scrutinizing eye, and an array of tools, he tirelessly pursues this exercise in perfection. But will his dedication to the little details cause him to lose sight of the bigger picture?
{"title":"On the level","authors":"Michael Rutter","doi":"10.1145/1665208.1665257","DOIUrl":"https://doi.org/10.1145/1665208.1665257","url":null,"abstract":"All Bean Maxwell wants is for the picture on his foyer wall to hang level. With a scrutinizing eye, and an array of tools, he tirelessly pursues this exercise in perfection. But will his dedication to the little details cause him to lose sight of the bigger picture?","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127631931","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This film describes an attempt to use a high-definition digital cinema system to produce content for a next-generation image system. Paintings by Ito Jakuchu were used as prototypes to produce an immersive virtual environment that allows people to enter the paintings. The result reveals possibilities for new collaborative studies among various fields such as art, psychology, and cognitive science, and a larger-than-life display to analyze and understand cultural properties and art works.
{"title":"Analysis and understanding of paintings by Ito Jakuchu","authors":"Sangtae Kim","doi":"10.1145/1665137.1665155","DOIUrl":"https://doi.org/10.1145/1665137.1665155","url":null,"abstract":"This film describes an attempt to use a high-definition digital cinema system to produce content for a next-generation image system. Paintings by Ito Jakuchu were used as prototypes to produce an immersive virtual environment that allows people to enter the paintings. The result reveals possibilities for new collaborative studies among various fields such as art, psychology, and cognitive science, and a larger-than-life display to analyze and understand cultural properties and art works.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122514101","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yuichiro Yamaguchi, Takuya Saito, Yosuke Bando, Bing-Yu Chen, T. Nishita
In traditional image composition, where a source object is cut out of a source image and pasted onto a target image, users have to segment a foreground object in the target image whenever they want to partially hide the source object behind it. While recent image editing tools greatly facilitate segmentation, segmenting each object can be tedious when users try placing a source object in various positions in the target image before arriving at a satisfying composition. We propose a method that allows users to drag a source object and slip it behind a target object, as shown in Fig. 1, so that they can move the source object around without manually segmenting each part of the target image.
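The compositing rule behind "slipping behind" is simple once an occlusion mask exists: source pixels show only where the source is present and the target foreground is absent. The paper's contribution is avoiding the manual construction of that mask; the sketch below assumes the target foreground mask is already given, and the function name is illustrative.

```python
import numpy as np

def paste_behind(target_rgb, target_mask, source_rgb, source_mask):
    """Composite a source object 'behind' a target foreground: source pixels
    appear only where the source is present AND the target foreground is not."""
    visible = source_mask & ~target_mask    # source visible where target foreground absent
    out = target_rgb.copy()
    out[visible] = source_rgb[visible]
    return out

# 1x2 image: pixel 0 is target foreground (red), pixel 1 is background.
tgt = np.array([[[255, 0, 0], [0, 0, 0]]], dtype=np.uint8)
tmask = np.array([[True, False]])
src = np.array([[[0, 255, 0], [0, 255, 0]]], dtype=np.uint8)  # green object
smask = np.array([[True, True]])
out = paste_behind(tgt, tmask, src, smask)
```

Dragging then amounts to translating `source_rgb`/`source_mask` and re-running the composite, with no further user segmentation.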
{"title":"Interactive image composition through draggable objects","authors":"Yuichiro Yamaguchi, Takuya Saito, Yosuke Bando, Bing-Yu Chen, T. Nishita","doi":"10.1145/1667146.1667186","DOIUrl":"https://doi.org/10.1145/1667146.1667186","url":null,"abstract":"In traditional image composition methods for cutting out a source object from a source image and pasting it onto a target image, users have to segment a foreground object in a target image when they want to partially hide a source object behind it. While recent image editing tools greatly facilitate segmentation operations, it can be tedious to segment each object if users try to place a source object in various positions in a target image before obtaining a satisfying composition. We propose a method which allows users to drag a source object and slip it behind a target object as shown in Fig. 1, so that users can move a source object around without manually segmenting each part of a target image.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"168 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122266570","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}