Consumer RGBD sensors are becoming ubiquitous and can be found in many devices, such as laptops (e.g., Intel's RealSense) and tablets (e.g., Google Tango, Structure). They have become popular in the graphics, vision, and HCI communities, as they enable numerous applications such as 3D capture, gesture recognition, and virtual fitting. Nowadays, common sensors can deliver a stream of color images and depth maps at VGA resolution and 30 fps. While the color image is usually of sufficient quality for visualization, the depth information (represented as a point cloud) is usually too sparse and noisy to render legibly.
{"title":"Augmented dynamic shape for live high quality rendering","authors":"Tony Tung","doi":"10.1145/2787626.2787643","DOIUrl":"https://doi.org/10.1145/2787626.2787643","url":null,"abstract":"Consumer RGBD sensors are becoming ubiquitous and can be found in many devices such as laptops (e.g., Intel's RealSense) or tablets (e.g., Google Tango, Structure, etc.). They have become popular in graphics, vision, and HCI communities as they enable numerous applications such as 3D capture, gesture recognition, virtual fitting, etc. Nowadays, common sensors can deliver a stream of color images and depth maps in VGA resolution at 30 fps. While the color image is usually of sufficient quality for visualization, depth information (represented as a point cloud) is usually too sparse and noisy for readable rendering.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":"112 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121125591","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The considerable advances in computer vision for hand and finger tracking have made several kinds of interaction possible in augmented reality (AR) systems, such as object grasping, object translation, and surface deformation [Chun and Höllerer 2013]. However, no method has yet considered interactions that involve topological changes of the augmented model (such as mesh cutting).
{"title":"Fracture in augmented reality","authors":"Nazim Haouchine, A. Bilger, Jérémie Dequidt, S. Cotin","doi":"10.1145/2787626.2792636","DOIUrl":"https://doi.org/10.1145/2787626.2792636","url":null,"abstract":"The considerable advances in Computer Vision for hand and finger tracking made it possible to have several sorts of interactions in Augmented Reality systems (AR), such as object grasping, object translation or surface deformation [Chun and Höllerer 2013]. However, no method has yet considered interaction than involves topological changes of the augmented model (like mesh cutting).","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115017306","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sketching is a natural human practice. With the spread of multi-touch tablets and styluses, it has become an increasingly common means of human-computer interaction. However, accurately recognizing sketches is challenging, especially when they are drawn by non-professionals, so automatic sketch understanding has attracted much research attention. To tackle the problem, we propose to segment sketch drawings before analyzing their semantic meaning, with the goal of developing a sketch-based 3D model retrieval system.
{"title":"Component segmentation of sketches used in 3D model retrieval","authors":"Yang Kang, Chi Xu, Shujin Lin, Songhua Xu, Xiaonan Luo, Qiang Chen","doi":"10.1145/2787626.2792655","DOIUrl":"https://doi.org/10.1145/2787626.2792655","url":null,"abstract":"Sketching is a natural human practice. With the popularity of multi-touch tablets and styluses, sketching has become a more popular means of human-computer interaction. However, accurately recognizing sketches is rather challenging, especially when they are drawn by non-professionals. Therefore, automatic sketch understanding has attracted much research attention. To tackle the problem, we propose to segment sketch drawings before analyzing the semantic meanings of sketches for the purpose of developing a sketch-based 3D model retrieval system.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116542473","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present a drone-based drawing system in which a user's sketch on a desk is transformed across scale and time and transferred in real time onto a larger canvas at a distance. Various spatio-temporal transformations, such as scaling, mirroring, time stretching, recording and playing back over time, and drawing simultaneously at multiple locations, allow for a variety of artistic effects. The unrestricted motion of the drone promises scalability and great potential as an artistic medium.
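As a concrete illustration of the spatio-temporal transformations involved, the sketch below maps a timestamped desk stroke onto canvas coordinates with scaling, mirroring, and time stretching. It is a minimal sketch under the assumption that strokes are sampled as (x, y, t) points; the function and parameter names are hypothetical, not part of the described system.

    import numpy as np

    def transform_stroke(points, scale=1.0, mirror_x=False, time_stretch=1.0):
        # points: (N, 3) array of (x, y, t) samples of one desk stroke.
        pts = np.asarray(points, dtype=float).copy()
        pts[:, :2] *= scale            # spatial scaling onto the larger canvas
        if mirror_x:
            pts[:, 0] *= -1.0          # mirror about the vertical axis
        pts[:, 2] *= time_stretch      # stretch or compress playback time
        return pts

    # Enlarge a stroke 10x, mirror it, and replay it at half speed.
    stroke = [(0.0, 0.0, 0.0), (0.1, 0.2, 0.5), (0.3, 0.1, 1.0)]
    waypoints = transform_stroke(stroke, scale=10.0, mirror_x=True, time_stretch=2.0)

The transformed samples would then serve as timed waypoints for the drone's path planner.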
{"title":"Z-drawing: a flying agent system for computer-assisted drawing","authors":"Sang-won Leigh, Harshit Agrawal, P. Maes","doi":"10.1145/2787626.2787652","DOIUrl":"https://doi.org/10.1145/2787626.2787652","url":null,"abstract":"We present a drone-based drawing system where a user's sketch on a desk is transformed across scale and time, and transferred onto a larger canvas at a distance in real-time. Various spatio-temporal transformations like scaling, mirroring, time stretching, recording and playing back over time, and simultaneously drawing at multiple locations allow for creating various artistic effects. The unrestricted motion of the drone promises scalability and a huge potential as an artistic medium.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122355716","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Stress is a physical response that affects everyone to varying degrees. Throughout history, people have developed various practices to help cope with stress, many of which focus on bringing awareness to the body and breath. Studies have shown that mindfulness meditation and paced breathing are effective tools for stress management [Brown 2005].
{"title":"Mobile haptic system design to evoke relaxation through paced breathing","authors":"A. Bumatay, J. Seo","doi":"10.1145/2787626.2792627","DOIUrl":"https://doi.org/10.1145/2787626.2792627","url":null,"abstract":"Stress is physical response that affects everyone in varying degrees. Throughout history, people have developed various practices to help cope with stress. Many of these practices focus on bringing awareness to the body and breath. Studies have shown that mindfulness meditation and paced breathing are effective tools for stress management [Brown, 2005].","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114283049","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
One main concern of audiovisual speech research is the intelligibility of audiovisual speech (i.e., a talking head). In fact, lip reading is crucial for challenged populations such as hard-of-hearing people. For audiovisual synthesis and animation, this suggests paying careful attention to modeling the region of the face that participates actively in speech. Above all, a facial animation system needs a very good representation of lip motion and deformation in order to achieve realism and effective communication.
{"title":"Dynamic realistic lip animation using a limited number of control points","authors":"Slim Ouni, Guillaume Gris","doi":"10.1145/2787626.2787628","DOIUrl":"https://doi.org/10.1145/2787626.2787628","url":null,"abstract":"One main concern of audiovisual speech research is the intelligibility of audiovisual speech (i.e., talking head). In fact, lip reading is crucial for challenged population as hard of hearing people. For audiovisual synthesis and animation, this suggests that one should pay careful attention to modeling the region of the face that participates actively during speech. Above all, a facial animation system needs extremely good representations of lip motion and deformation in order to achieve realism and effective communication.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129772402","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The glory is a colorful atmospheric phenomenon which resembles a small circular rainbow on the front surface of a cloudbank. It is most frequently seen from aircraft when the observer is directly between the sun and the clouds. Glories are also sometimes seen by skydivers looking down through thin cloud layers. They are always centered around the shadow of the observer's head (or camera).
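Since glories are centered on the antisolar point, a renderer can index a glory's rings by each viewing ray's angular distance from the antisolar direction. The sketch below shows this geometry; it is an illustrative fragment under that assumption, not the poster's actual shader, and the function name is hypothetical.

    import numpy as np

    def antisolar_angle_deg(view_dir, sun_dir):
        # Angle between a viewing ray and the antisolar point (the shadow of
        # the observer's head), which is where the glory's rings are centered.
        v = np.asarray(view_dir, dtype=float)
        v /= np.linalg.norm(v)
        s = np.asarray(sun_dir, dtype=float)
        anti = -s / np.linalg.norm(s)          # antisolar direction
        cos_a = np.clip(np.dot(v, anti), -1.0, 1.0)
        return np.degrees(np.arccos(cos_a))

    # A ray about 2 degrees off the antisolar point lies within typical glory radii.
    print(antisolar_angle_deg([0.0349, 0.0, -0.9994], [0.0, 0.0, 1.0]))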
{"title":"Real-time rendering of atmospheric glories","authors":"Ari Rapkin Blenkhorn","doi":"10.1145/2787626.2787632","DOIUrl":"https://doi.org/10.1145/2787626.2787632","url":null,"abstract":"The glory is a colorful atmospheric phenomenon which resembles a small circular rainbow on the front surface of a cloudbank. It is most frequently seen from aircraft when the observer is directly between the sun and the clouds. Glories are also sometimes seen by skydivers looking down through thin cloud layers. They are always centered around the shadow of the observer's head (or camera).","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":"504 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128502957","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Synthesizing competitive interactions between two avatars in a physics-based simulation remains challenging. Most previous works rely on reusing motion-capture data and need an offline preprocessing step to either build motion graphs or perform motion analysis. On the other hand, an online motion synthesis algorithm [Hämäläinen et al. 2014] can produce physically plausible motions, including balance recovery and dodging projectiles, without prior data, using a kd-tree sequential Monte Carlo sampler to optimize joint-angle trajectories. We extend this approach and propose a new objective function to create two-character animations of close-range combat. The principles of attack and defense are designed according to the fundamental theory of Chinese martial arts. Instead of following a series of fixed Kung Fu forms, our method gives the 3D avatars the freedom to explore diverse movements and, through pruning, to eventually evolve an effective way of fighting.
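For intuition, the sketch below shows one generic score-prune-resample step of a sequential Monte Carlo sampler over candidate trajectories. In the actual method, candidates are scored inside a physics simulation with an attack/defense objective; here that is abstracted into an arbitrary callable, and all names are illustrative rather than taken from the paper.

    import numpy as np

    def smc_step(particles, objective, n_keep, noise_std):
        # particles: (N, D) candidate joint-angle trajectories for the next
        # control window. Score all candidates, prune the weak ones, then
        # resample the survivors with Gaussian perturbations to explore
        # nearby motions.
        scores = np.array([objective(p) for p in particles])
        elite = particles[np.argsort(scores)[-n_keep:]]
        idx = np.random.randint(n_keep, size=len(particles))
        return elite[idx] + np.random.normal(0.0, noise_std, particles.shape)

    # Toy objective: prefer trajectories close to a target pose vector.
    target = np.zeros(8)
    pool = np.random.normal(0.0, 1.0, (64, 8))
    for _ in range(20):
        pool = smc_step(pool, lambda p: -np.linalg.norm(p - target), 8, 0.1)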
{"title":"Synthesizing close combat using sequential Monte Carlo","authors":"I. Chiang, Po-Han Lin, Yuan-Hung Chang, M. Ouhyoung","doi":"10.1145/2787626.2787638","DOIUrl":"https://doi.org/10.1145/2787626.2787638","url":null,"abstract":"Synthesizing competitive interactions between two avatars in a physics-based simulation remains challenging. Most previous works rely on reusing motion capture data. They also need an offline preprocessing step to either build motion graphs or perform motion analysis. On the other hand, an online motion synthesis algorithm [Hämäläinen et al. 2014] can produce physically plausible motions including balance recovery and dodge projectiles without prior data. They use a kd-tree sequential Monte Carlo sampler to optimize the joint angle trajectories. We extend their approach and propose a new objective function to create two-character animations in a close-range combat. The principles of attack and defense are designed according to fundamental theory of Chinese martial arts. Instead of following a series of fixed Kung Fu forms, our method gives 3D avatars the freedom to explore diverse movements and through pruning can finally evolve an optimal way for fighting.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129486623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Twech is a mobile platform that enables users to record, share, and search visuo-tactile experiences. Using a visuo-tactile recording and display attachment for a smartphone, a user can instantly share an experience, much like posting a tweet, and re-experience shared data through visuo-motor coupling. In addition, Twech's search engine, built on a deep-learning model extended to recognize tactile materials, retrieves experiences similar to uploaded tactile data, such as scratching material surfaces or interacting with animals. Twech thus provides a way to share and discover haptic experiences, and users can re-experience uploaded visuo-tactile data from a cloud server.
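A minimal sketch of the similarity-search idea follows, assuming each tactile recording has already been embedded into a fixed-length feature vector by the learned model; the function name and data layout are hypothetical, not Twech's actual API.

    import numpy as np

    def nearest_experiences(query_vec, library, k=5):
        # library: mapping from experience id to its embedding vector, as
        # produced by a network trained to recognize tactile materials.
        q = np.asarray(query_vec, dtype=float)
        q /= np.linalg.norm(q)
        sims = {eid: float(np.dot(q, v / np.linalg.norm(v)))
                for eid, v in library.items()}
        return sorted(sims, key=sims.get, reverse=True)[:k]

    # Toy library of three stored recordings with 4-D embeddings.
    lib = {"scratch_wood": np.array([1.0, 0.1, 0.0, 0.0]),
           "cat_fur":      np.array([0.0, 1.0, 0.2, 0.0]),
           "metal_grate":  np.array([0.9, 0.0, 0.1, 0.3])}
    print(nearest_experiences([1.0, 0.0, 0.1, 0.2], lib, k=2))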
{"title":"Twech: a mobile platform to search and share visuo-tactile experiences","authors":"Nobuhisa Hanamitsu, Kanata Nakamura, M. Y. Saraiji, K. Minamizawa, S. Tachi","doi":"10.1145/2787626.2792628","DOIUrl":"https://doi.org/10.1145/2787626.2792628","url":null,"abstract":"Twech is a mobile platform that enables users to share visuo-tactile experience and search other experiences for tactile data. User can record and share visuo-tactile experiences by using a visuo-tactile recording and displaying attachment for smartphone, allows the user to instantly such as tweet, and re-experience shared data such as visuo-motor coupling. Further, Twech's search engine finds similar other experiences, which were scratched material surfaces, communicated with animals or other experiences, for uploaded tactile data by using search engine is based on deep learning that ware expanded for recognizing tactile materials. Twech provides a sharing and finding haptic experiences and users re-experience uploaded visual-tactile data from cloud server.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133013573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Maintaining a high, steady frame rate is an important aspect of interactive real-time graphics. It is mainly influenced by the number of objects and the number of lights to be processed for a 3D scene. The upper-bound effort for rendering a scene is then defined by the number of objects times the number of lights, i.e., O(N_O · N_L). Deferred shading reduces this upper bound to the number of objects plus the number of lights, i.e., O(N_O + N_L), by separating the rendering process into two phases: geometry processing and lighting evaluation. The geometry phase rasterizes all objects but only retains visible fragments in a G-Buffer for the current viewpoint. The lighting phase then only needs to process those surviving fragments to compute the final image (for the current viewpoint). Unfortunately, this approach not only trades computational effort for memory but also requires re-creating the G-Buffer every time the viewpoint changes. Additionally, transparent objects cannot be encoded into a G-Buffer and must be processed separately. Post-rendering 3D warping [Mark et al. 1997] is one technique that allows creating images for new viewpoints from G-Buffer information. However, this only works with sufficient fragment information: objects not encoded in the G-Buffer, because they were not visible from the original viewpoint, will create visual artifacts at discontinuities between objects. We propose fragment-history volumes (FHV) to create novel viewpoints from a discrete representation of the entire scene using current graphics hardware, and we present an initial performance comparison.
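The cost argument can be made concrete with a toy software rasterizer: forward shading evaluates every light for every fragment of every object, while deferred shading first reduces the scene to one visible fragment per pixel and only then loops over lights. The following sketch mirrors that structure; Frag, rasterize, and shade are illustrative stand-ins, not part of the described system.

    from dataclasses import dataclass

    @dataclass
    class Frag:
        pixel: tuple    # (x, y) target pixel
        depth: float
        albedo: float

    def rasterize(obj):
        # Stand-in rasterizer: an "object" is simply a list of fragments here.
        return obj

    def shade(frag, light):
        # Stand-in lighting term; a real shader would use normals, etc.
        return frag.albedo * light

    def forward_render(objects, lights):
        # Every fragment that passes the depth test pays for every light,
        # giving the O(N_O * N_L) upper bound.
        depth, image = {}, {}
        for obj in objects:
            for frag in rasterize(obj):
                if frag.depth < depth.get(frag.pixel, float("inf")):
                    depth[frag.pixel] = frag.depth
                    image[frag.pixel] = sum(shade(frag, l) for l in lights)
        return image

    def deferred_render(objects, lights):
        # Geometry pass: touch each object once, keeping only the nearest
        # fragment per pixel in the G-Buffer -> O(N_O).
        gbuffer = {}
        for obj in objects:
            for frag in rasterize(obj):
                best = gbuffer.get(frag.pixel)
                if best is None or frag.depth < best.depth:
                    gbuffer[frag.pixel] = frag
        # Lighting pass: shade only the surviving fragments -> O(N_L) per pixel.
        return {px: sum(shade(f, l) for l in lights) for px, f in gbuffer.items()}

    # Both paths agree on the final image; they differ in how much shading work
    # is spent on fragments that are later overdrawn.
    objs = [[Frag((0, 0), 1.0, 0.5)], [Frag((0, 0), 2.0, 0.9)]]
    assert forward_render(objs, [1.0, 2.0]) == deferred_render(objs, [1.0, 2.0])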
{"title":"Reducing geometry-processing overhead for novel viewpoint creation","authors":"Francisco Inácio, J. P. Springer","doi":"10.1145/2787626.2792599","DOIUrl":"https://doi.org/10.1145/2787626.2792599","url":null,"abstract":"Maintaining a high steady frame rate is an important aspect in interactive real-time graphics. It is mainly influenced by the number of objects and the number of lights to be processed for a 3d scene. The upper-bound effort for rendering a scene is then defined by the number of objects times the number of lights, i. e. O(NO · NL). Deferred shading reduces this upper bound to the number of objects plus the number of lights, i. e. O(NO + NL), by separating the rendering process into two phases: geometry processing and lighting evaluation. The geometry processing rasterizes all objects but only retains visible fragments in a G-Buffer for the current viewpoint. The lighting evaluation then only needs to process those surviving fragments to compute the final image (for the current viewpoint). Unfortunately, this approach not only trades computational effort for memory but also requires the re-creation of the G-Buffer every time the viewpoint changes. Additionally, transparent objects cannot be encoded into a G-Buffer and must be separately processed. Post-rendering 3d warping [Mark et al. 1997] is one particular technique that allows to create images from G-Buffer information for new viewpoints. However, this only works with sufficient fragment information. Objects not encoded in the G-Buffer, because they were not visible from the original viewpoint, will create visual artifacts at discontinuities between objects. We propose fragment-history volumes (FHV) to create novel viewpoints from a discrete representation of the entire scene using current graphics hardware and present an initial performance comparison.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133117488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}