Data-driven face cartoon stylization
Yong Zhang, Weiming Dong, O. Deussen, Feiyue Huang, Kexin Li, Bao-Gang Hu
This paper presents a data-driven framework for generating cartoon-like facial representations from a given portrait image. We solve the problem with an optimization that simultaneously considers a desired artistic style, the image-cartoon relationships of facial components, and an automatic adjustment of the image composition. The stylization consists of two steps: a face parsing step that localizes and extracts facial components from the input image, and a cartoon generation step that cartoonizes the face according to the extracted information. The components of the cartoon are assembled from a database of stylized facial components. The similarity between the facial components of the input and the cartoon is quantified by image feature matching. We incorporate prior knowledge about photo-cartoon relationships and about the optimal composition of cartoon facial components, extracted from a set of cartoon faces, to maintain a natural and attractive look in the results.
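The abstract names two concrete ingredients: feature matching between input and database components, and a composition prior learned from example cartoon faces. The following is a minimal sketch of how those two terms could look; the feature representation, energy weights, and function names are our assumptions, not the paper's implementation.

```python
import numpy as np

def best_component(photo_feat, db_feats):
    """Match one parsed facial component to the database by feature distance.
    photo_feat: (d,) feature of the extracted component;
    db_feats: (n, d) features of the stylized database components."""
    dists = np.linalg.norm(db_feats - photo_feat, axis=1)
    return int(np.argmin(dists)), float(dists.min())

def cartoon_energy(match_costs, layout, layout_prior, w_comp=0.5):
    """Score a candidate cartoon: component similarity plus a composition
    prior (stacked component positions vs. the learned layout). The weight
    w_comp is a placeholder, not a value from the paper."""
    return sum(match_costs) + w_comp * np.linalg.norm(layout - layout_prior) ** 2
```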
{"title":"Data-driven face cartoon stylization","authors":"Yong Zhang, Weiming Dong, O. Deussen, Feiyue Huang, Kexin Li, Bao-Gang Hu","doi":"10.1145/2669024.2669028","DOIUrl":"https://doi.org/10.1145/2669024.2669028","url":null,"abstract":"This paper presents a data-driven framework for generating cartoon-like facial representations from a given portrait image. We solve our problem by an optimization that simultaneously considers a desired artistic style, image-cartoon relationships of facial components as well as automatic adjustment of the image composition. The stylization operation consists of two steps: a face parsing step to localize and extract facial components from the input image; a cartoon generation step to cartoonize the face according to the extracted information. The components of the cartoon are assembled from a database of stylized facial components. Quantifying the similarity between facial components of input and cartoon is done by image feature matching. We incorporate prior knowledge about photo-cartoon relationships and the optimal composition of cartoon facial components extracted from a set of cartoon faces to maintain a natural and attractive look of the results.","PeriodicalId":353683,"journal":{"name":"SIGGRAPH Asia 2014 Technical Briefs","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125668299","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Density aware shape modeling to control mass properties of 3D printed objects
Daiki Yamanaka, Hiromasa Suzuki, Y. Ohtake
When creating a physical model for 3D printing, the density distribution of the object is important because it determines mass properties such as the center of mass, total mass, and moment of inertia. In this paper, we present a density-aware shape modeling method to control the mass properties of 3D printed objects. We generate a continuous density distribution that satisfies the given mass properties, and a 3D printable model that represents this density distribution using a truss structure. The number of nodes and their positions are iteratively optimized to minimize the error between the target density and the density of the truss structure. With our technique, 3D printed objects that have desired mass properties can be fabricated.
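The mass properties the optimization must satisfy are standard integrals of the density field. A small sketch evaluating them on a voxelized density (the discretization is our assumption; the paper works with a continuous distribution approximated by a truss):

```python
import numpy as np

def mass_properties(rho, h):
    """rho: 3D array of voxel densities; h: voxel edge length (meters)."""
    dV = h ** 3
    mass = rho.sum() * dV
    zs, ys, xs = np.indices(rho.shape) * h          # voxel-center coordinates
    com = np.array([(rho * c).sum() * dV for c in (xs, ys, zs)]) / mass
    # Moment of inertia about the vertical axis through the center of mass.
    r2 = (xs - com[0]) ** 2 + (ys - com[1]) ** 2
    I_z = (rho * r2).sum() * dV
    return mass, com, I_z

# Example: a uniform 8x8x8 block with 1 cm voxels.
print(mass_properties(np.ones((8, 8, 8)), 0.01))
```

An optimization loop in the paper's spirit would perturb truss node positions until these quantities, computed for the truss density, match the targets.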
{"title":"Density aware shape modeling to control mass properties of 3D printed objects","authors":"Daiki Yamanaka, Hiromasa Suzuki, Y. Ohtake","doi":"10.1145/2669024.2669040","DOIUrl":"https://doi.org/10.1145/2669024.2669040","url":null,"abstract":"When creating a physical model to 3D print, the density distribution of an object is important because it determines the mass properties of objects such as center of mass, total mass and moment of inertia. In this paper, we present a density aware shape modelling method to control the mass properties of 3D printed objects. We generate a continuous density distribution that satisfies the given mass properties and generate a 3D printable model that represents this density distribution using a truss structure. The number of nodes and their positions are iteratively optimized so as to minimize error between the target density and the density of the truss structure. With our technique, 3D printed objects that have desired mass properties can be fabricated.","PeriodicalId":353683,"journal":{"name":"SIGGRAPH Asia 2014 Technical Briefs","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122398681","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The subdivision wavelet transform with local shape control
Chong Zhao, Hanqiu Sun
In this paper, we present a method to construct an efficient wavelet transform based on the matrix-valued Loop subdivision. The new wavelet transform inherits the advantages of the matrix-valued subdivision and offers good shape-preserving ability. By adopting a local lifting scheme, it is efficient and uses less memory. Our experiments show that the proposed wavelet transform is sufficiently stable and that the fitting quality of the resulting surfaces is good.
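The local lifting scheme is what makes the transform in-place and memory-lean. The paper lifts matrix-valued Loop subdivision on triangle meshes; as a 1D analogy only, here is a linear-prediction lifting step showing the predict/update structure and its exact invertibility:

```python
import numpy as np

def lifting_forward(x):
    """One level of a linear-prediction lifting wavelet (1D, periodic)."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    # Predict: each odd sample from the average of its even neighbors.
    detail = odd - 0.5 * (even + np.roll(even, -1))
    # Update: coarse samples absorb part of the detail, preserving the mean.
    coarse = even + 0.25 * (detail + np.roll(detail, 1))
    return coarse, detail

def lifting_inverse(coarse, detail):
    # Undo the update, then the prediction -- each step inverts exactly.
    even = coarse - 0.25 * (detail + np.roll(detail, 1))
    odd = detail + 0.5 * (even + np.roll(even, -1))
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x

x = np.random.rand(16)
c, d = lifting_forward(x)
assert np.allclose(lifting_inverse(c, d), x)  # lifting is exactly invertible
```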
{"title":"The subdivision wavelet transform with local shape control","authors":"Chong Zhao, Hanqiu Sun","doi":"10.1145/2669024.2669038","DOIUrl":"https://doi.org/10.1145/2669024.2669038","url":null,"abstract":"In this paper, we present a method to construct the efficient wavelet transform based on the matrix-valued Loop subdivision. The new wavelet transforms inherits the advantages of the matrix-valued subdivision and offers the good shape preserving ability. By adopting the local lifting scheme, it is efficient and uses less memory. Our experiments showed that the proposed wavelet transform is sufficiently stable and the fitting quality of resulted surfaces is good.","PeriodicalId":353683,"journal":{"name":"SIGGRAPH Asia 2014 Technical Briefs","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130298154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Splashing liquids with ambient gas pressure
Kazuhide Ueda, I. Fujishiro
Splashing occurs when a liquid drop hits a solid or fluid surface at high velocity. After the impact, the drop spreads and forms a corona with a thickened rim, which first develops annular undulations and then breaks into secondary droplets. Splashes are common in daily life, e.g., milk crowns, splashing paint, and raindrops falling onto a pool, and their characteristic deformations have a significant impact on the visual realism of the phenomena. Many experimental studies have sought criteria for when splashing occurs, but the physical mechanisms of splashing are still not completely understood. It was only recently discovered that ambient gas pressure is a principal factor in creating such a splash. In this paper, we therefore incorporate the effect of ambient gas pressure into the Navier-Stokes equations of an SPH fluid simulation to represent splashing dynamics more accurately. Our experiments demonstrate that the new approach requires very little additional computing cost to capture realistic liquid behaviors such as fingering, which had not previously been attained by SPH or most other fluid simulation schemes.
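The abstract does not give the exact coupling of the gas pressure term, so the following is only a hedged sketch of a weakly compressible SPH pressure evaluation with an ambient gas pressure applied at free-surface particles, to illustrate where such a term could enter:

```python
import numpy as np

def sph_pressures(pos, m, h, rho0, k, p_gas):
    """pos: (N, 2) particle positions; m: particle mass; h: kernel radius;
    rho0: rest density; k: stiffness; p_gas: ambient gas pressure.
    The surface test and the additive coupling are our assumptions."""
    n = len(pos)
    rho = np.zeros(n)
    poly6 = 4.0 / (np.pi * h ** 8)               # 2D poly6 normalization
    for i in range(n):                           # O(N^2), fine for a sketch
        r2 = np.sum((pos - pos[i]) ** 2, axis=1)
        w = np.where(r2 < h * h, poly6 * (h * h - r2) ** 3, 0.0)
        rho[i] = m * w.sum()
    p = k * (rho - rho0)                         # equation of state
    surface = rho < 0.9 * rho0                   # crude free-surface detection
    p = p + np.where(surface, p_gas, 0.0)        # ambient gas contribution
    return rho, p
```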
{"title":"Splashing liquids with ambient gas pressure","authors":"Kazuhide Ueda, I. Fujishiro","doi":"10.1145/2669024.2669036","DOIUrl":"https://doi.org/10.1145/2669024.2669036","url":null,"abstract":"Splashing occurs when a liquid drop hits the solid or fluid surface at a high velocity. The drop after the impact spreads and forms a corona with a thickened rim, which first develops annular undulations and then breaks into secondary droplets. We have many chances to see splashes in our daily life, e.g., milk crown, splashing paint, and raindrops falling onto a pool, whose characteristics of deformation have a significant impact on the visual reality of the phenomena. Many experimental studies have been conducted to find criteria on when splashing would occur, but the physical mechanisms of splashing are still not completely understood. It was only recently discovered that ambient gas pressure is a principal factor for creating such a splash. In this paper, therefore, we newly incorporate the ambient gas pressure effect into the Navier-Stokes equations through SPH fluid simulation for representing more accurate splashing dynamics. Our experiments demonstrated that the new approach requires very little additional computing cost to capture realistic liquid behaviors like fingering, which have not previously been attained by SPH nor most schemes for fluid simulation.","PeriodicalId":353683,"journal":{"name":"SIGGRAPH Asia 2014 Technical Briefs","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115435256","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Panoramic e-learning videos for non-linear navigation
Rosália G. Schneider, M. M. O. Neto
We introduce a new interface for augmenting e-learning videos with panoramic frames and content-based navigation. Our interface gradually builds a panoramic video and allows users to navigate it by directly clicking on its contents, as opposed to using a conventional time slider. We demonstrate the effectiveness of our approach by successfully applying it to three representative styles of e-learning videos: Khan Academy, Coursera, and conventional lectures recorded with a camera. The techniques described provide more efficient ways of exploring the benefits of e-learning videos. As such, they have the potential to impact education by providing more customizable learning experiences for millions of e-learners around the world.
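One plausible realization of the click-to-seek navigation (our assumption, not the authors' system) is to remember, while building the panorama, when each pixel first appeared, and to translate a click into a seek to that timestamp:

```python
import numpy as np

class PanoramaIndex:
    """Maps panorama pixels to the video time at which they first appeared."""

    def __init__(self, height, width):
        self.first_seen = np.full((height, width), -1.0)  # seconds; -1 = empty

    def add_frame(self, mask, t):
        """mask: boolean array of panorama pixels newly covered at time t."""
        new = mask & (self.first_seen < 0)
        self.first_seen[new] = t

    def click_to_time(self, row, col):
        """A click on drawn content seeks the video; empty regions do nothing."""
        t = self.first_seen[row, col]
        return None if t < 0 else t
```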
{"title":"Panoramic e-learning videos for non-linear navigation","authors":"Rosália G. Schneider, M. M. O. Neto","doi":"10.1145/2669024.2669027","DOIUrl":"https://doi.org/10.1145/2669024.2669027","url":null,"abstract":"We introduce a new interface for augmenting e-learning videos with panoramic frames and content-based navigation. Our interface gradually builds a panoramic video, and allows users to navigate through such video by directly clicking on its contents, as opposed to using a conventional time slider. We demonstrate the effectiveness of our approach by successfully applying it to three representative styles of e-learning videos: Khan Academy, Coursera, and conventional lecture recorded with a camera. The techniques described provide more efficient ways for exploring the benefits of e-learning videos. As such, they have the potential to impact education by providing more customizable learning experiences for millions of e-learners around the world.","PeriodicalId":353683,"journal":{"name":"SIGGRAPH Asia 2014 Technical Briefs","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125774751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Local and nonlocal guidance coupled surface deformation
Yufeng Tang, Dongqing Zou, Jianwei Li, Xiaowu Chen
This paper presents a novel 3D surface deformation method with local and nonlocal guidance. It is important to deform a mesh while preserving its global shape and local properties. Previous methods generally deform a surface according to local geometric affinity alone, which leads to artifacts such as local and global shape distortion. Instead, our approach uses locally linear embedding (LLE) to construct a nonlocal relationship between each vertex and its nonlocal neighbors in a geometric feature space, and uses well-known local neighborhood coherence to represent the local relationship. We then couple the local and nonlocal guidance to propagate a local deformation over the whole surface while maintaining both relationships. The nonlocal guidance essentially preserves the global shape, the local guidance maintains the local properties, and the two complement each other when propagating the deformation. Our method can also be extended to mesh merging. Experimental results on various models demonstrate its effectiveness.
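The nonlocal relationship is the standard LLE construction (Roweis and Saul), applied per vertex in a feature space: weights that reconstruct a vertex from its nonlocal neighbors and sum to one. Keeping these weights fixed while the surface deforms is what carries the global shape through the deformation. A sketch of that building block (the feature space itself is defined in the paper and omitted here):

```python
import numpy as np

def lle_weights(x, neighbors, reg=1e-3):
    """x: (d,) feature of one vertex; neighbors: (k, d) features of its
    nonlocal neighbors. Returns w with x ~= w @ neighbors and w.sum() == 1."""
    z = neighbors - x                            # shift to the query point
    G = z @ z.T                                  # local Gram matrix (k, k)
    G += reg * np.trace(G) * np.eye(len(G))      # regularize when k > d
    w = np.linalg.solve(G, np.ones(len(G)))      # constrained least squares
    return w / w.sum()
```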
{"title":"Local and nonlocal guidance coupled surface deformation","authors":"Yufeng Tang, Dongqing Zou, Jianwei Li, Xiaowu Chen","doi":"10.1145/2669024.2669030","DOIUrl":"https://doi.org/10.1145/2669024.2669030","url":null,"abstract":"This paper presents a novel 3D shape surface deformation method with local and nonlocal guidance. It is important to deform a mesh while preserving the global shape and local properties. Previous methods generally deform a surface according to the local geometric affinity, which leads to artifacts such as local and global shape distortion. Instead, our approach uses the locally linear embedding (LLE) to construct the nonlocal relationship for each vertex and its nonlocal neighbors in a geometric feature space, and uses a well known local neighborhood coherence to represent the local relationship. We then couple these two local and nonlocal guidance together to propagate the local deformation over the whole surface while maintaining these two relationships. The nonlocal guidance essentially preserves the global shape and the local guidance maintains the local properties, and these two guidance complements each other when propagating the deformation. Our method can be extended for mesh merging. Experimental results on various models demonstrate the effectiveness of our method.","PeriodicalId":353683,"journal":{"name":"SIGGRAPH Asia 2014 Technical Briefs","volume":"189 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124313550","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
When does the hidden butterfly not flicker?
Jing Liu, Soja-Marie Morgens, Robert C. Sumner, Luke Buschmann, Yu Zhang, James Davis
The emergence of high frame rate computational displays has created an opportunity for viewing experiences impossible on traditional displays. These displays can create views personalized to multiple users, encode hidden messages, or even decompose and encode a targeted light field to create glasses-free 3D views [Masia et al. 2013].
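The usual way such displays hide a message (and the setting in which the title's question arises) is to alternate complementary frames whose average is the intended visible image. A sketch of that encoding, with the amplitude parameter as our assumption:

```python
import numpy as np

def encode_pair(visible, hidden, amplitude=0.1):
    """visible, hidden: float images in [0, 1]. Returns two frames that are
    shown alternately at high rate; the eye averages them to ~visible, while
    the difference carries the hidden pattern."""
    m = amplitude * (hidden - 0.5)
    f1 = np.clip(visible + m, 0.0, 1.0)
    f2 = np.clip(visible - m, 0.0, 1.0)
    return f1, f2   # whether the viewer perceives flicker depends on rate
```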
{"title":"When does the hidden butterfly not flicker?","authors":"Jing Liu, Soja-Marie Morgens, RobertC Sumner, Luke Buschmann, Yu Zhang, James Davis","doi":"10.1145/2669024.2669026","DOIUrl":"https://doi.org/10.1145/2669024.2669026","url":null,"abstract":"The emergence of high frame rate computational displays has created an opportunity for viewing experiences impossible on traditional displays. These displays can create views personalized to multiple users, encode hidden messages, or even decompose and encode a targeted light field to create glasses-free 3D views [Masia et al. 2013].","PeriodicalId":353683,"journal":{"name":"SIGGRAPH Asia 2014 Technical Briefs","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124391930","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Underwater reconstruction using depth sensors
Alexandru Dancu, M. Fourgeaud, Zlatko Franjcic, R. Avetisyan
In this paper we describe experiments in which we acquire range images of underwater surfaces with four types of depth sensors and attempt to reconstruct the surfaces. Two conditions are tested: acquiring range images with the sensors submerged, and with the sensors held above the water line recording through the water. We found that only the Kinect sensor is able to acquire depth images of submerged surfaces when held above the water. We compare the reconstructed underwater geometry with meshes obtained when the surfaces were not submerged. These findings show that 3D underwater reconstruction using depth sensors is possible, despite the high absorption by water of the near-infrared spectrum in which these sensors operate.
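The abstract does not state the comparison metric; one reasonable choice (an assumption, not the authors' method) is a symmetric nearest-neighbor vertex distance between the underwater and dry meshes:

```python
import numpy as np
from scipy.spatial import cKDTree

def mesh_distance(verts_a, verts_b):
    """verts_a: (n, 3), verts_b: (m, 3) vertex arrays of the two meshes.
    Returns the symmetric Hausdorff distance and the mean distance."""
    d_ab = cKDTree(verts_b).query(verts_a)[0]   # each a-vertex to nearest b
    d_ba = cKDTree(verts_a).query(verts_b)[0]   # each b-vertex to nearest a
    return max(d_ab.max(), d_ba.max()), (d_ab.mean() + d_ba.mean()) / 2
```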
{"title":"Underwater reconstruction using depth sensors","authors":"Alexandru Dancu, M. Fourgeaud, Zlatko Franjcic, R. Avetisyan","doi":"10.1145/2669024.2669042","DOIUrl":"https://doi.org/10.1145/2669024.2669042","url":null,"abstract":"In this paper we describe experiments in which we acquire range images of underwater surfaces with four types of depth sensors and attempt to reconstruct underwater surfaces. Two conditions are tested: acquiring range images by submersing the sensors and by holding the sensors over the water line and recording through water. We found out that only the Kinect sensor is able to acquire depth images of submersed surfaces by holding the sensor above water. We compare the reconstructed underwater geometry with meshes obtained when the surfaces were not submersed. These findings show that 3D underwater reconstruction using depth sensors is possible, despite the high water absorption of the near infrared spectrum in which these sensors operate.","PeriodicalId":353683,"journal":{"name":"SIGGRAPH Asia 2014 Technical Briefs","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130717675","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deformation of 2D flow fields using stream functions
Syuhei Sato, Y. Dobashi, Kei Iwasaki, Tsuyoshi Yamamoto, T. Nishita
Visual simulation of fluids has become an important element in many applications, such as movies and computer games. These fluid animations are usually created by physically-based simulation, which often incurs a very high computational cost for realistic results. A user who wants to create various fluid animations must therefore run the simulation repeatedly, at prohibitive expense. To address this problem, this paper proposes a method for deforming the velocity fields of fluids while preserving the divergence-free condition. We focus on grid-based 2D fluid simulations. Our system allows the user to interactively create various fluid animations from a single set of velocity fields generated by simulation. In a preprocess, our method converts the input velocity fields into scalar fields representing their stream functions. At run time, the user deforms the grid representing the scalar stream functions, and the deformed velocity fields are obtained by applying the curl operator to the deformed stream functions. Velocity fields obtained this way naturally preserve the divergence-free condition. For the deformation of the grid, we use a method based on moving least squares. The usefulness of our method is demonstrated by several examples.
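The key identity is that a velocity field defined as the curl of a scalar stream function ψ, i.e. (u, v) = (∂ψ/∂y, −∂ψ/∂x), is divergence-free by construction, so deforming ψ rather than the velocities preserves incompressibility. A minimal numerical sketch (the moving-least-squares grid deformation is omitted):

```python
import numpy as np

def velocity_from_stream(psi, h=1.0):
    """psi: 2D array of stream-function samples on a grid with spacing h.
    Returns (u, v) = (d psi / dy, -d psi / dx)."""
    dpsi_dy, dpsi_dx = np.gradient(psi, h)   # axis 0 ~ y (rows), axis 1 ~ x
    return dpsi_dy, -dpsi_dx

# Example: a single Gaussian vortex, then a numerical divergence check.
ys, xs = np.mgrid[-1:1:64j, -1:1:64j]
h = 2.0 / 63
psi = np.exp(-(xs ** 2 + ys ** 2) / 0.1)     # hypothetical test function
u, v = velocity_from_stream(psi, h)
du_dx = np.gradient(u, h, axis=1)
dv_dy = np.gradient(v, h, axis=0)
print(np.abs(du_dx + dv_dy).max())           # ~0 up to finite-difference error
```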
{"title":"Deformation of 2D flow fields using stream functions","authors":"Syuhei Sato, Y. Dobashi, Kei Iwasaki, Tsuyoshi Yamamoto, T. Nishita","doi":"10.1145/2669024.2669039","DOIUrl":"https://doi.org/10.1145/2669024.2669039","url":null,"abstract":"Recently, visual simulation of fluids has become an important element in many applications, such as movies and computer games. These fluid animations are usually created by physically-based fluid simulation. However, the simulation often requires very expensive computational cost for creating realistic fluid animations. Therefore, when the user tries to create various fluid animations, he or she must execute fluid simulation repeatedly, which requires a prohibitive computational time. To address this problem, this paper proposes a method for deforming velocity fields of fluids while preserving the divergence-free condition. In this paper, we focus on grid-based 2D fluid simulations. Our system allows the user to interactively create various fluid animations from a single set of velocity fields generated by the fluid simulation. In a preprocess, our method converts the input velocity fields into scalar fields representing the stream functions. At run-time, the user deforms the grid representing the scalar stream functions and the deformed velocity fields are then obtained by applying a curl operator to the deformed scalar stream functions. The velocity fields obtained by this process naturally perseveres the divergence-free condition. For the deformation of the grid, we use a method based on Moving Least Squares. The usefulness of our method is demonstrated by several examples.","PeriodicalId":353683,"journal":{"name":"SIGGRAPH Asia 2014 Technical Briefs","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128287106","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Feature-oriented writing process reproduction of Chinese calligraphic artwork
Lijie Yang, Tianchen Xu, Xiaoshan Li, E. Wu
Reproducing the writing process of ancient handwritten artworks is a popular way of appreciating and learning the expert skills of Chinese calligraphy. This paper presents a system for recreating the writing processes of calligraphic characters in different styles. To convey the precise brush skill within a stroke, a calligraphic character is first decomposed into several strokes; the writing trajectory and footprint data of each stroke are then calculated from its edge and skeleton, which reveal the relations between shape description and writing skills; finally, the character is rendered dynamically in the oriental ink style along the trajectory using our writing rhythm and brush footprint models. Consequently, animations of calligraphy writing can be produced that convey both shape and spirit features ([see PDF]), providing a visual and relaxed way to comprehend the complicated and difficult techniques of Chinese calligraphy.
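A common way to obtain the trajectory and footprint the abstract describes (library choices are ours, not the authors') is to skeletonize the binarized stroke for the writing trajectory and read the brush radius off a distance transform:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.morphology import skeletonize

def trajectory_and_footprint(stroke_mask):
    """stroke_mask: 2D boolean image of one segmented stroke.
    Returns skeleton points (trajectory samples) and a brush radius per point."""
    skel = skeletonize(stroke_mask)                   # medial axis ~ trajectory
    radius = distance_transform_edt(stroke_mask)      # distance to stroke edge
    ys, xs = np.nonzero(skel)
    return np.column_stack([xs, ys]), radius[ys, xs]  # points + brush widths
```

Ordering the skeleton points into a temporal writing path (the paper's writing rhythm model) is the harder step and is not reproduced here.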
{"title":"Feature-oriented writing process reproduction of Chinese calligraphic artwork","authors":"Lijie Yang, Tianchen Xu, Xiaoshan Li, E. Wu","doi":"10.1145/2669024.2669032","DOIUrl":"https://doi.org/10.1145/2669024.2669032","url":null,"abstract":"Reproducing the writing process of ancient handwritten artworks is a popular way to appreciating and learning the expert skills of Chinese calligraphy. This paper presents a system for reappearing the writing processes of calligraphic characters in different styles. In order to convey the accurate brush skill inside a stroke, a calligraphic character is first decomposed into several strokes, then the writing trajectory and footprint data of each stroke are calculated based on the edge and skeleton, which reveal the relations between shape description and writing skills, and finally the character can be rendered in the oriental ink style dynamically along the trajectory using our writing rhythm and brush footprint models. Consequently, the animation of calligraphy writing can be produced with both shape and spirit features conveyed ([see PDF]), and thus provides a visual and relax way to the comprehension of the complicated and difficult techniques in Chinese calligraphy.","PeriodicalId":353683,"journal":{"name":"SIGGRAPH Asia 2014 Technical Briefs","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123533354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}