"ColorFingers: improved multi-touch color picker"
A. J. G. Ebbinason, B. R. Kanna
ColorFingers is a WYSIWYG, Location Independent Touch (LIT) based color-picking tool designed to provide a unique and swift interaction for choosing colors on touch-based devices. It uses the touch interface and the touch information of two fingers to select from almost 16 million colors. The tool demonstrates how touch input can be interpreted in different ways to achieve performance improvements in HCI. In this paper, we propose the ColorFingers color picker and briefly describe how it works. We show how it achieves around a 54% reduction in color selection time and a 53% improvement in accuracy compared with existing models. The proposed model emphasizes multi-touch interaction, quick feedback, and location independence.
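The abstract does not specify how the two fingers' touch data map to a color, so the following is only a plausible sketch of the idea: location-independent drag deltas from two fingers drive an HSV triple, which covers the 256^3 ≈ 16.7 million colors mentioned above. The function name and `gain` parameter are hypothetical.

```python
import colorsys

def pick_color(f1_dx, f1_dy, f2_dy, gain=0.005):
    """Map location-independent finger motions to a 24-bit RGB color.

    f1_dx, f1_dy: first finger's drag (in pixels) from its touch-down
    point, steering hue and saturation; f2_dy: second finger's vertical
    drag, steering brightness. Using deltas rather than absolute screen
    positions is what makes the interaction location independent.
    (Hypothetical mapping, not the paper's actual scheme.)
    """
    clamp = lambda x: min(max(x, 0.0), 1.0)
    h = (f1_dx * gain) % 1.0           # hue wraps around the color wheel
    s = clamp(0.5 + f1_dy * gain)      # saturation, centered at 50%
    v = clamp(0.5 - f2_dy * gain)      # dragging down darkens the color
    r, g, b = colorsys.hsv_to_rgb(h, s, v)
    return int(r * 255), int(g * 255), int(b * 255)  # 256^3 combinations
```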
{"title":"ColorFingers: improved multi-touch color picker","authors":"A. J. G. Ebbinason, B. R. Kanna","doi":"10.1145/2669024.2669033","DOIUrl":"https://doi.org/10.1145/2669024.2669033","url":null,"abstract":"ColorFingers is a WYSIWYG, Location Independent Touch (LIT) based color picking tool aimed to give unique and swift interaction in choosing color on touch based devices. It makes use of touch interface and the touch information of two fingers to select almost 16 million colors. This tool is a model to prove how touch can be interpreted in different ways to achieve performance improvements in HCI. In this paper, we propose ColorFingers which is a color picker and briefly discuss the working of it. We show, how it achieves around 54% reduction in color selection time and 53% improvement in accuracy when compared to existing models. The proposed model emphasizes on Multi-touch, Quick feedback and Location Independency.","PeriodicalId":353683,"journal":{"name":"SIGGRAPH Asia 2014 Technical Briefs","volume":"494 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123563555","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Visualizing building interiors using virtual windows"
N. Joseph, Brett Achorn, Sean Jenkins, Hank Driskill
The feature film "Big Hero 6" is set in a fictional city with numerous scenes encompassing hundreds of buildings. The objects visible inside the windows, especially during nighttime, play a vital role in portraying the realism of the scene. Unfortunately, it can be expensive to individually model each room in every building. Thus, the production team needed a way to render building interiors with reasonable parallax effects, without adding geometry in an already large scene. This paper describes a novel building interior visualization system using a Virtual Window Shader (Shader) written for a ray-traced global illumination (GI) multi-bounce renderer [Eisenacher et al. 2013]. The Shader efficiently creates an illusion of geometry and light sources inside building windows using only pre-baked textures.
{"title":"Visualizing building interiors using virtual windows","authors":"N. Joseph, Brett Achorn, Sean Jenkins, Hank Driskill","doi":"10.1145/2669024.2669029","DOIUrl":"https://doi.org/10.1145/2669024.2669029","url":null,"abstract":"The feature film \"Big Hero 6\" is set in a fictional city with numerous scenes encompassing hundreds of buildings. The objects visible inside the windows, especially during nighttime, play a vital role in portraying the realism of the scene. Unfortunately, it can be expensive to individually model each room in every building. Thus, the production team needed a way to render building interiors with reasonable parallax effects, without adding geometry in an already large scene. This paper describes a novel building interior visualization system using a Virtual Window Shader (Shader) written for a ray-traced global illumination (GI) multi-bounce renderer [Eisenacher et al. 2013]. The Shader efficiently creates an illusion of geometry and light sources inside building windows using only pre-baked textures.","PeriodicalId":353683,"journal":{"name":"SIGGRAPH Asia 2014 Technical Briefs","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125411248","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Topology-aware reconstruction of thin tubular structures"
Tobias Martin, Juan Montes, J. Bazin, T. Popa
This paper addresses the 3D reconstruction of thin tubular structures, such as cables or ropes, from a given image sequence. This is a challenging task, mainly because of the structure's self-occlusions and thin features. We present an approach that combines image-processing tools with physics simulation to faithfully reconstruct jumbled and tangled cables in 3D. Our method estimates the topology of the tubular object in the form of a single 1D path and computes a topology-aware reconstruction of its geometry. We evaluate our method on both synthetic and real datasets and demonstrate that it compares favourably with state-of-the-art methods.
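The abstract does not give the simulation details. As a toy illustration of how a physics prior can regularize a recovered 1D path, here is a hypothetical spring-style relaxation of an ordered, noisy centerline (not the paper's model):

```python
import numpy as np

def relax_centerline(path, iters=200, k_stretch=0.5, k_bend=0.25):
    """Relax a noisy 1D centerline with a toy spring/bending model.

    path: (N, 3) array of ordered samples along the estimated cable path.
    Each iteration pulls every interior sample toward the midpoint of its
    neighbors (stretch term) and toward a wider-stencil average (bend
    term), mimicking the smoothing effect of a physics-based rope model.
    Endpoints stay fixed.
    """
    p = path.astype(float).copy()
    for _ in range(iters):
        mid = 0.5 * (p[:-2] + p[2:])            # neighbor midpoints
        p[1:-1] += k_stretch * (mid - p[1:-1])  # stretch/smoothing pull
        if len(p) > 4:
            wide = 0.5 * (p[:-4] + p[4:])       # straightening pull
            p[2:-2] += k_bend * (wide - p[2:-2])
    return p
```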
{"title":"Topology-aware reconstruction of thin tubular structures","authors":"Tobias Martin, Juan Montes, J. Bazin, T. Popa","doi":"10.1145/2669024.2669035","DOIUrl":"https://doi.org/10.1145/2669024.2669035","url":null,"abstract":"This paper is dedicated to the 3D reconstruction of thin tubular structures, such as cables or ropes, from a given image sequence. This is a challenging task, mainly because of self-occlusions of the structure and its thin features. We present an approach that combines image processing tools with physics simulation to faithfully reconstruct jumbled and tangled cables in 3D. Our method estimates the topology of the tubular object in the form of a single 1D path and also computes a topology-aware reconstruction of its geometry. We evaluate our method on both synthetic and real datasets and demonstrate that our method favourably compares to state-of-the-art methods.","PeriodicalId":353683,"journal":{"name":"SIGGRAPH Asia 2014 Technical Briefs","volume":"144 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131719942","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Unified skinning of rigid and deformable models for anatomical simulations"
I. Stavness, C. A. Sánchez, J. Lloyd, A. Ho, Johnty Wang, S. Fels, Danny Huang
We propose a novel geometric skinning approach that unifies geometric blending for rigid-body models with embedded surfaces for finite-element models. The resulting skinning method provides flexibility for modelers and animators to select the desired dynamic degrees-of-freedom through a combination of coupled rigid and deformable structures connected to a single skin mesh that is influenced by all dynamic components. The approach is particularly useful for anatomical models that include a mix of hard structures (bones) and soft tissues (muscles, tendons). We demonstrate our skinning method for an upper airway model and create first-of-its-kind simulations of swallowing and speech acoustics that are generated by muscle-driven biomechanical models of the oral anatomy.
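A minimal sketch of the stated idea, not the paper's formulation: one skin vertex is influenced by both rigid bone transforms (linear blend skinning) and FEM node positions (embedded interpolation), assuming the combined weights form a partition of unity.

```python
import numpy as np

def unified_skin_vertex(rest, bones, bone_weights, fem_nodes, fem_weights):
    """Blend rigid (LBS) and FEM-embedded influences on one skin vertex.

    rest: (3,) rest position. bones: list of 4x4 bone transforms with
    weights bone_weights; fem_nodes: (M, 3) current positions of the FEM
    nodes embedding the vertex, with interpolation weights fem_weights.
    All weights together are assumed to sum to 1, so rigid and deformable
    components drive a single skin mesh consistently.
    """
    pos = np.zeros(3)
    r4 = np.append(rest, 1.0)                  # homogeneous rest position
    for T, w in zip(bones, bone_weights):      # rigid: linear blend skinning
        pos += w * (T @ r4)[:3]
    for x, w in zip(fem_nodes, fem_weights):   # deformable: embedded interp
        pos += w * x
    return pos
```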
{"title":"Unified skinning of rigid and deformable models for anatomical simulations","authors":"I. Stavness, C. A. Sánchez, J. Lloyd, A. Ho, Johnty Wang, S. Fels, Danny Huang","doi":"10.1145/2669024.2669031","DOIUrl":"https://doi.org/10.1145/2669024.2669031","url":null,"abstract":"We propose a novel geometric skinning approach that unifies geometric blending for rigid-body models with embedded surfaces for finite-element models. The resulting skinning method provides flexibility for modelers and animators to select the desired dynamic degrees-of-freedom through a combination of coupled rigid and deformable structures connected to a single skin mesh that is influenced by all dynamic components. The approach is particularly useful for anatomical models that include a mix of hard structures (bones) and soft tissues (muscles, tendons). We demonstrate our skinning method for an upper airway model and create first-of-its-kind simulations of swallowing and speech acoustics that are generated by muscle-driven biomechanical models of the oral anatomy.","PeriodicalId":353683,"journal":{"name":"SIGGRAPH Asia 2014 Technical Briefs","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133750565","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Virtual spherical Gaussian lights for real-time glossy indirect illumination"
Yusuke Tokuyoshi
Virtual point light (VPL) [Keller 1997] based global illumination methods are well established for interactive applications, but they suffer from considerable problems such as spiky artifacts and temporal flickering caused by singularities, high-frequency materials, and discontinuous geometries (Fig. 1). This paper proposes an efficient technique to render one-bounce interreflections for all-frequency materials based on virtual spherical lights (VSLs) [Hašan et al. 2009]. VSLs were proposed to suppress the spiky artifacts of VPLs, but they are unsuitable for real-time applications, since they need expensive Monte Carlo (MC) integration and k-nearest-neighbor density estimation for each VSL. This paper approximates VSLs using spherical Gaussian (SG) lights, which have no singularities and take all-frequency materials into account. Instead of k-nearest-neighbor density estimation, this paper presents a simple SG light generation technique using mipmap filtering, which alleviates temporal flickering for high-frequency geometries and textures (e.g., normal maps) at real-time frame rates. Since SG-light-based approximations are inconsistent estimators, this paper additionally discusses a consistent bias reduction technique. Our technique is simple, easy to integrate into existing reflective shadow map (RSM) based implementations, and completely dynamic for one-bounce indirect illumination including caustics.
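The SG lobe itself has a standard closed form (taken from the general SG literature, not from this brief): G(v) = a · exp(λ(v·p − 1)) is bounded everywhere, which is why replacing point lights with SG lights removes the singular spikes mentioned above.

```python
import numpy as np

def sg_eval(v, axis, sharpness, amplitude):
    """Evaluate a spherical Gaussian lobe G(v) = a * exp(l*(dot(v,p) - 1)).

    v, axis: unit direction vectors; sharpness l > 0; amplitude a. The
    value is bounded by `amplitude` for every direction, so an SG light
    has no singularity, unlike a virtual point light's 1/r^2 falloff.
    """
    return amplitude * np.exp(sharpness * (np.dot(v, axis) - 1.0))

def sg_integral(sharpness, amplitude=1.0):
    """Closed-form integral of an SG over the sphere:
    a * (2*pi/l) * (1 - exp(-2l)); useful for energy normalization."""
    return amplitude * 2.0 * np.pi / sharpness * (1.0 - np.exp(-2.0 * sharpness))
```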
{"title":"Virtual spherical gaussian lights for real-time glossy indirect illumination","authors":"Yusuke Tokuyoshi","doi":"10.1145/2669024.2669025","DOIUrl":"https://doi.org/10.1145/2669024.2669025","url":null,"abstract":"Virtual point light (VPL) [Keller 1997] based global illumination methods are well established for interactive applications, but they have considerable problems such as spiky artifacts and temporal flickering caused by singularities, high-frequency materials, and discontinuous geometries (Fig. 1). This paper proposes an efficient technique to render one-bounce interreflections for all-frequency materials based on virtual spherical lights (VSLs) [Hašan et al. 2009]. VSLs were proposed to suppress spiky artifacts of VPLs. However, this is unsuitable for real-time applications, since it needs expensive Monte-Carlo (MC) integration and k-nearest neighbor density estimation for each VSL. This paper approximates VSLs using spherical Gaussian (SG) lights without singularities, which take all-frequency materials into account. Instead of k-nearest neighbor density estimation, this paper presents a simple SG lights generation technique using mipmap filtering which alleviates temporal flickering for high-frequency geometries and textures (e.g., normal maps) at real-time frame rates. Since SG lights based approximations are inconsistent estimators, this paper additionally discusses a consistent bias reduction technique. Our technique is simple, easy to integrate in existing reflective shadow map (RSM) based implementations, and completely dynamic for one-bounce indirect illumination including caustics.","PeriodicalId":353683,"journal":{"name":"SIGGRAPH Asia 2014 Technical Briefs","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133629920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Real time light field reconstruction for sub-pixel based integral imaging display"
Shaohui Jiao, Wen Wu, Haitao Wang, Mingcai Zhou, Tao Hong, Xun Sun, E. Wu
Integral imaging (II) displays provide a promising 3D display technology that lets users see natural 3D color images with stereo and motion parallax. However, they often suffer from two limitations: insufficient spatial resolution and a lack of real-time content generation strategies. In this paper, we advance the traditional II display with an efficient sub-pixel based light field reconstruction scheme, achieving 3D imagery with much higher spatial resolution at real-time speed.
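A schematic sketch of the sub-pixel idea, with hypothetical names and parameters (the paper's actual reconstruction scheme is more involved): treating each RGB sub-pixel, rather than each whole pixel, as a sample under the lens array triples the horizontal sampling density, which is where the resolution gain comes from.

```python
def subpixel_view_index(x_pix, channel, lens_pitch_px, n_views):
    """Assign an RGB sub-pixel to a view of an integral-imaging display.

    x_pix: horizontal pixel index; channel: 0/1/2 for R/G/B;
    lens_pitch_px: lens pitch in pixels; n_views: number of views.
    Each sub-pixel sits at a 1/3-pixel horizontal offset, so assigning
    views per sub-pixel samples the light field three times more densely
    than per-pixel assignment.
    """
    x_sub = x_pix + (channel + 0.5) / 3.0            # sub-pixel center
    phase = (x_sub % lens_pitch_px) / lens_pitch_px  # position under lens
    return int(phase * n_views) % n_views
```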
{"title":"Real time light field reconstruction for sub-pixel based integral imaging display","authors":"Shaohui Jiao, Wen Wu, Haitao Wang, Mingcai Zhou, Tao Hong, Xun Sun, E. Wu","doi":"10.1145/2669024.2669041","DOIUrl":"https://doi.org/10.1145/2669024.2669041","url":null,"abstract":"Integral imaging (II) display provides a promising 3D display technology for users to see natural 3D color images with stereo and motion parallax. However, they often suffers from the limitations of both the insufficient spatial resolution and lack of real-time content generation strategies. In this paper, we advance the traditional II display with an efficient sub-pixel based light field reconstruction scheme, to achieve 3D imagery with much higher spatial resolution in real-time speed.","PeriodicalId":353683,"journal":{"name":"SIGGRAPH Asia 2014 Technical Briefs","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129590124","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Depth of field rendering via adaptive recursive filtering"
Shibiao Xu, Xing Mei, Weiming Dong, Xun Sun, Xukun Shen, Xiaopeng Zhang
We present a new post-processing method for rendering high-quality depth-of-field effects in real time. Our method is based on a recursive filtering process, which adaptively smooths the image frame using local depth and circle-of-confusion information. Unlike previous post-filtering approaches that rely on various convolution kernels, the behavior of our filter is controlled by a weighting function defined between two neighboring pixels. By properly designing this weighting function, our method produces spatially varying smoothed results, correctly handles the boundaries between in-focus and out-of-focus objects, and avoids rendering artifacts such as intensity leakage and blurring discontinuity. Additionally, our method works on the full frame without resorting to image pyramids. Our algorithm runs efficiently on graphics hardware. We demonstrate the effectiveness of the proposed method on several complex scenes.
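The abstract does not spell out the weighting function, so the sketch below shows only the generic recursive-filter skeleton such a method builds on, with an assumed per-pixel weight derived elsewhere from depth and circle of confusion:

```python
import numpy as np

def recursive_dof_1d(color, weight):
    """One left-to-right pass of an adaptive recursive blur on a scanline.

    color: (N, 3) scanline; weight: (N,) per-pixel feedback in [0, 1),
    assumed to come from depth/circle-of-confusion (0 = in focus, keep
    sharp; near 1 = strongly blurred). Each output blends the current
    pixel with the previous filtered result, so the blur radius adapts
    per pixel. A full effect would also run a right-to-left pass and two
    vertical passes.
    """
    out = color.astype(float).copy()
    for i in range(1, len(out)):
        w = weight[i]
        out[i] = (1.0 - w) * color[i] + w * out[i - 1]
    return out
```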
{"title":"Depth of field rendering via adaptive recursive filtering","authors":"Shibiao Xu, Xing Mei, Weiming Dong, Xun Sun, Xukun Shen, Xiaopeng Zhang","doi":"10.1145/2669024.2669034","DOIUrl":"https://doi.org/10.1145/2669024.2669034","url":null,"abstract":"We present a new post-processing method for rendering high-quality depth-of-field effects in real time. Our method is based on a recursive filtering process, which adaptively smooths the image frame with local depth and circle of confusion information. Unlike previous post-filtering approaches that rely on various convolution kernels, the behavior of our filter is controlled by a weighting function defined between two neighboring pixels. By properly designing this weighting function, our method produces spatially-varying smoothed results, correctly handles the boundaries between in-focus and out-of-focus objects, and avoids rendering artifacts such as intensity leakage and blurring discontinuity. Additionally, our method works on the full frame without resorting to image pyramids. Our algorithms runs efficiently on graphics hardware. We demonstrate the effectiveness of the proposed method with several complex scenes.","PeriodicalId":353683,"journal":{"name":"SIGGRAPH Asia 2014 Technical Briefs","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114210375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Reference-based manga colorization by graph correspondence using quadratic programming"
Kazuhiro Sato, Yusuke Matsui, T. Yamasaki, K. Aizawa
Manga (Japanese comics) are popular all over the world, but most existing manga are monochrome. If such monochrome manga could be colorized, readers could enjoy richer representations. In this paper, we propose a semi-automatic colorization method for manga. Given a previously colored reference manga image and target monochrome manga images, we propagate the colors of the reference manga to the target manga by representing the images as graphs and matching those graphs. The proposed method enables coloring of manga images without time-consuming manual colorization. We show results in which the colors of characters were correctly transferred to target characters, even those with complex structures.
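The paper's quadratic-programming formulation is not reproduced in the abstract. As a stand-in, SciPy's quadratic assignment solver illustrates the core step of matching two region graphs; it assumes both images segment into the same number of regions, and the names are hypothetical.

```python
from scipy.optimize import quadratic_assignment

def match_regions(adj_ref, adj_tgt):
    """Match reference regions to target regions by graph correspondence.

    adj_ref, adj_tgt: (N, N) adjacency matrices of the two region graphs
    (nodes = segmented regions, edges = region adjacency). The solver
    seeks the node permutation preserving as many adjacencies as
    possible; reference region i then donates its color to target
    region perm[i].
    """
    res = quadratic_assignment(adj_ref, adj_tgt, method="faq",
                               options={"maximize": True})
    return res.col_ind  # perm: reference index -> target index
```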
{"title":"Reference-based manga colorization by graph correspondence using quadratic programming","authors":"Kazuhiro Sato, Yusuke Matsui, T. Yamasaki, K. Aizawa","doi":"10.1145/2669024.2669037","DOIUrl":"https://doi.org/10.1145/2669024.2669037","url":null,"abstract":"Manga (Japanese comics) are popular all over the world. However, most existing manga are monochrome. If such monochrome manga can be colorized, readers can enjoy the richer representations. In this paper, we propose a semiautomatic colorization method for manga. Given a previously colored reference manga image and target monochrome manga images, we propagate the colors of the reference manga to the target manga by representing images as graphs and matching graphs. The proposed method enables coloring of manga images without time-consuming manual colorization. We show results in which the colors of characters were correctly transferred to target characters, even those with complex structures.","PeriodicalId":353683,"journal":{"name":"SIGGRAPH Asia 2014 Technical Briefs","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123762796","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}