Markerless motion capture using a single depth sensor
Amit Bleiweiss, E. Eilat, Gershom Kutliroff
SIGGRAPH Asia 2009 · doi: 10.1145/1667146.1667172

We present a robust framework for tracking skeleton joints in real time using a single time-of-flight depth sensor. The framework removes the background noise inherent in time-of-flight cameras, detects multiple people, and tracks up to 30 freely moving joints per person. The approach has several advantages over traditional motion capture: it is an inexpensive alternative to magnetic and optical systems, and it requires no markers whatsoever. Unlike markerless systems based on RGB cameras [Deutscher et al. 2000; Kehl and Van Gool 2006], our framework yields dependable results at an interactive rate using a single camera.

Instant broadcasting system: mobile collaborative live video mixing
Arvid Engström, L. Brunnberg, Josefin Carlsson, O. Juhlin
SIGGRAPH Asia 2009 · doi: 10.1145/1665137.1665192

With Instant Broadcasting System, people can collaboratively produce, edit, and broadcast live video using only mobile phones, a laptop computer, and available mobile networks. In this demonstration, it is used as a VJ system that supports visitor-generated video, flexible content selection, a communication back channel, and real-time loop editing. These features move the system beyond previous webcam-based VJ concepts.

The cubtile: 3D multitouch brings virtual worlds into the user's hands
Jean-Baptiste de la Rivière, E. Orvain, Cédric Kervégant, Nicolas Dittlo
SIGGRAPH Asia 2009 · doi: 10.1145/1665137.1665184

This demonstration combines the cubtile, a new 3D multitouch device that expands tactile input from surface-only interaction to full-volume manipulation, with an augmented-reality-like setup that blends interaction and visualization spaces to put 3D objects between the user's hands.

GPU accelerated isosurface volume rendering using depth-based coherence
C. Braley, R. Hagan, Yong Cao, D. Gračanin
SIGGRAPH Asia 2009 · doi: 10.1145/1666778.1666820

With large scientific and medical datasets, visualization tools struggle to maintain a frame rate high enough to remain interactive. In this paper, we present a novel GPU-based system that permits real-time visualization of isosurfaces in large datasets. In particular, we present a novel use of a depth buffer to speed up rotation around a volume dataset. As the user rotates the viewpoint around the 3D volume data, there is substantial coherence between the depth buffers of two sequential renderings. We exploit this coherence in our prediction buffer approach and achieve a marked speedup during rotation. The authors of [Klein et al. 2005] used a depth-buffer-based approach, but they did not alter their traversal based on the prediction value. Our prediction buffer is a 2D array that stores a single floating-point value per pixel. If a particular pixel p_ij has a positive depth value d_ij, then the ray R_ij cast through p_ij on the previous render intersected an isosurface at depth d_ij. The prediction buffer also handles three special cases. When the ray R_ij misses the isosurface but hits the bounding box containing the volume data, we store a negative flag value, d_hitBoxMissSurf, in p_ij. When R_ij misses the bounding box, we store the value d_missBox. Lastly, when no prediction is stored in the buffer, we store the value d_noInfo.

Flight lessons
N. Helm
SIGGRAPH Asia 2009 · doi: 10.1145/1665208.1665244

In the afternoon light of a small rural airport, an airline captain and aviation enthusiast relates the wonders and intricacies of modern jet aviation to an unexpected audience. Set in a large open-air hangar, the story takes place among several great aircraft from aviation history. A seasoned professional, the pilot's confident and normally reserved disposition quickly gives way to a display of youthful exuberance. As each part of the engine is described, he becomes increasingly lost in a world of admiration and amazement for such items as "combustionators" and "turbinators." The captain's growing enthusiasm for the subject is evident in both his dialog and his impassioned gestures. When the captain finally concludes his description of the highly technical device, his audience is revealed to the viewer in a humorous twist that leaves the enigmatic pilot at a loss for words.

The hybrid outdoor tracking extension for the daylight blocker display
Pedro Santos, H. Schmedt, Sebastian Hohmann, A. Stork
SIGGRAPH Asia 2009 · doi: 10.1145/1666778.1666812

From UMPCs to smartphones, we witness the emergence of highly integrated mobile computing platforms that boast higher performance than any of their preceding systems. However, the demand for ever more complex applications, such as mixed reality applications for outdoor scenarios, is growing just as fast, so the need to distribute resources efficiently across the different application tasks remains. In particular, pose estimation in outdoor environments still presents a major ongoing challenge.

Efficient shading system based on similar shader retrieval
Hee-Kwon Kim, Jea-Ho Lee, Seung-Woo Nam
SIGGRAPH Asia 2009 · doi: 10.1145/1666778.1666799

In this paper, we propose INISIS (Intuitive and Interactive Shading Interface System), an efficient shading system based on similar-shader retrieval. Using INISIS, CG artists can easily obtain high-quality shaders from the provided shader database, which contains many useful shaders that can be used directly in a CG pipeline. With the developed system, users can reduce the effort and time spent on the shading process by slightly tuning a few attributes of the retrieved shaders, even without detailed knowledge of shader attributes. INISIS supports several retrieval methods: retrieval based on shader features such as color, pattern, and texture, as well as image-based retrieval.

Numeric code
N. Takahashi
SIGGRAPH Asia 2009 · doi: 10.1145/1665208.1665255

Editing 3D models through the GUIs of excellent commercial software seems almost equivalent to a sculptor carving a Japanese cypress or molding clay into a sculpture. Nothing in this direct operational environment requires the user to perform complicated mathematical calculations. It is easy to forget that CG creation is built upon numbers. However, the 3D models we create through this illusory act of formation are simply numbers in memory space, far removed from real-space models like sculptures or statues. If we cannot touch them with our hands, they do not communicate the accumulation of time (because it takes time to create sculptures).

Photon density estimation using multiple importance sampling
Yusuke Tokuyoshi
SIGGRAPH Asia 2009 · doi: 10.1145/1666778.1666815

In global illumination rendering, final gathering (or path tracing) and caustics photon maps [Jensen 2001] are frequently used. However, photon mapping generates noisy estimates on glossy surfaces, as in the left image of Figure 1. We solve this problem using MIS (multiple importance sampling) [Veach 1997]. The advantages of our method are easy implementation, low overhead, and good estimation results without delicate parameter tuning.

Instances of commediation
Rita Sá, Joana Sá, Eduardo Raon
SIGGRAPH Asia 2009 · doi: 10.1145/1665137.1665169

Instances of Commediation recreates the multiplicity of environments we live in today, along with our spatial impermanence. Five LCD screens show different perspectives of simultaneous fictional events that begin as watercolor paintings and later become animated. This process of remediation [Bolter and Grusin 2000], and of the adjustment of an older medium to a new one, becomes an analogy to the adjustment of our traditional social behaviors to new spaces of social interaction.