
SIGGRAPH Asia 2014 Technical Briefs: Latest Publications

Data-driven face cartoon stylization
Pub Date : 2014-11-24 DOI: 10.1145/2669024.2669028
Yong Zhang, Weiming Dong, O. Deussen, Feiyue Huang, Kexin Li, Bao-Gang Hu
This paper presents a data-driven framework for generating cartoon-like facial representations from a given portrait image. We solve the problem with an optimization that simultaneously considers a desired artistic style, the image-cartoon relationships of facial components, and automatic adjustment of the image composition. The stylization operation consists of two steps: a face parsing step that localizes and extracts facial components from the input image, and a cartoon generation step that cartoonizes the face according to the extracted information. The components of the cartoon are assembled from a database of stylized facial components. The similarity between facial components of the input and of the cartoon is quantified by image feature matching. We incorporate prior knowledge about photo-cartoon relationships and about the optimal composition of cartoon facial components, extracted from a set of cartoon faces, to maintain a natural and attractive look in the results.
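The abstract only states that component similarity is quantified by image feature matching; the sketch below is a hypothetical stand-in, assuming grayscale component patches, a simple gradient-orientation histogram as the descriptor, and nearest-neighbor retrieval from a small component database (none of these choices are specified by the paper).

```python
import numpy as np

def gradient_histogram(patch, bins=16):
    """Simple orientation-histogram descriptor for a grayscale patch (assumed
    descriptor; the paper only says 'image feature matching')."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)  # orientations in [-pi, pi]
    hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi), weights=mag)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

def best_match(query_patch, database_patches):
    """Return the index of the database component most similar to the query."""
    q = gradient_histogram(query_patch)
    sims = [float(np.dot(q, gradient_histogram(p))) for p in database_patches]
    return int(np.argmax(sims))

# Toy usage with random patches standing in for segmented facial components.
rng = np.random.default_rng(0)
query = rng.random((32, 32))
db = [rng.random((32, 32)) for _ in range(5)]
print("best matching component index:", best_match(query, db))
```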
{"title":"Data-driven face cartoon stylization","authors":"Yong Zhang, Weiming Dong, O. Deussen, Feiyue Huang, Kexin Li, Bao-Gang Hu","doi":"10.1145/2669024.2669028","DOIUrl":"https://doi.org/10.1145/2669024.2669028","url":null,"abstract":"This paper presents a data-driven framework for generating cartoon-like facial representations from a given portrait image. We solve our problem by an optimization that simultaneously considers a desired artistic style, image-cartoon relationships of facial components as well as automatic adjustment of the image composition. The stylization operation consists of two steps: a face parsing step to localize and extract facial components from the input image; a cartoon generation step to cartoonize the face according to the extracted information. The components of the cartoon are assembled from a database of stylized facial components. Quantifying the similarity between facial components of input and cartoon is done by image feature matching. We incorporate prior knowledge about photo-cartoon relationships and the optimal composition of cartoon facial components extracted from a set of cartoon faces to maintain a natural and attractive look of the results.","PeriodicalId":353683,"journal":{"name":"SIGGRAPH Asia 2014 Technical Briefs","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125668299","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 20
Density aware shape modeling to control mass properties of 3D printed objects
Pub Date : 2014-11-24 DOI: 10.1145/2669024.2669040
Daiki Yamanaka, Hiromasa Suzuki, Y. Ohtake
When creating a physical model to 3D print, the density distribution of an object is important because it determines mass properties such as the center of mass, total mass, and moment of inertia. In this paper, we present a density-aware shape modeling method to control the mass properties of 3D printed objects. We generate a continuous density distribution that satisfies the given mass properties and then generate a 3D printable model that represents this density distribution with a truss structure. The number of nodes and their positions are iteratively optimized to minimize the error between the target density and the density of the truss structure. With our technique, 3D printed objects that have the desired mass properties can be fabricated.
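The optimization itself is not reproduced here, but the mass properties it controls are standard quantities. A minimal sketch, assuming the object is discretized as a voxel grid of densities (the paper instead realizes a continuous distribution with a truss structure), computes the total mass, center of mass, and moment of inertia about the vertical axis through the center of mass.

```python
import numpy as np

def mass_properties(density, voxel_size=1.0):
    """Total mass, center of mass, and moment of inertia about the z-axis
    through the center of mass, for a 3D voxel grid of densities."""
    dv = voxel_size ** 3
    # Voxel center coordinates, flattened to an (n_voxels, 3) array.
    idx = (np.indices(density.shape).reshape(3, -1).T + 0.5) * voxel_size
    rho = density.ravel()
    mass = rho.sum() * dv
    com = (idx * rho[:, None]).sum(axis=0) * dv / mass
    r2 = (idx[:, 0] - com[0]) ** 2 + (idx[:, 1] - com[1]) ** 2  # squared distance to the z-axis
    inertia_z = (rho * r2).sum() * dv
    return mass, com, inertia_z

# Toy example: a 20^3 block whose density increases along x, shifting the center of mass.
density = np.tile(np.linspace(0.2, 1.0, 20)[:, None, None], (1, 20, 20))
m, c, iz = mass_properties(density, voxel_size=0.01)
print(m, c, iz)
```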
{"title":"Density aware shape modeling to control mass properties of 3D printed objects","authors":"Daiki Yamanaka, Hiromasa Suzuki, Y. Ohtake","doi":"10.1145/2669024.2669040","DOIUrl":"https://doi.org/10.1145/2669024.2669040","url":null,"abstract":"When creating a physical model to 3D print, the density distribution of an object is important because it determines the mass properties of objects such as center of mass, total mass and moment of inertia. In this paper, we present a density aware shape modelling method to control the mass properties of 3D printed objects. We generate a continuous density distribution that satisfies the given mass properties and generate a 3D printable model that represents this density distribution using a truss structure. The number of nodes and their positions are iteratively optimized so as to minimize error between the target density and the density of the truss structure. With our technique, 3D printed objects that have desired mass properties can be fabricated.","PeriodicalId":353683,"journal":{"name":"SIGGRAPH Asia 2014 Technical Briefs","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122398681","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
The subdivision wavelet transform with local shape control
Pub Date : 2014-11-24 DOI: 10.1145/2669024.2669038
Chong Zhao, Hanqiu Sun
In this paper, we present a method for constructing an efficient wavelet transform based on the matrix-valued Loop subdivision. The new wavelet transform inherits the advantages of the matrix-valued subdivision and offers good shape-preserving ability. By adopting a local lifting scheme, it is efficient and uses less memory. Our experiments showed that the proposed wavelet transform is sufficiently stable and that the fitting quality of the resulting surfaces is good.
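The matrix-valued Loop construction is not reproduced here; the sketch below only illustrates the generic lifting idea the abstract refers to (split, predict, update, all computed in place on a 1D periodic signal), which is why lifting is fast and memory-friendly.

```python
import numpy as np

def lifting_forward(signal):
    """One level of a simple lifting wavelet (linear predict, mean-preserving update).
    Generic illustration only, not the matrix-valued Loop subdivision wavelet."""
    s = np.asarray(signal, dtype=float)
    even, odd = s[0::2].copy(), s[1::2].copy()
    # Predict: each odd sample from the average of its even neighbors (periodic boundary).
    odd -= 0.5 * (even + np.roll(even, -1))
    # Update: correct the evens so the coarse signal keeps the running average.
    even += 0.25 * (odd + np.roll(odd, 1))
    return even, odd  # coarse approximation, detail (wavelet) coefficients

def lifting_inverse(even, odd):
    """Exact inverse: undo the lifting steps in reverse order with opposite signs."""
    even = even - 0.25 * (odd + np.roll(odd, 1))
    odd = odd + 0.5 * (even + np.roll(even, -1))
    out = np.empty(even.size + odd.size)
    out[0::2], out[1::2] = even, odd
    return out

x = np.sin(np.linspace(0, 2 * np.pi, 16, endpoint=False))
c, d = lifting_forward(x)
print(np.allclose(lifting_inverse(c, d), x))  # True: lifting is trivially invertible
```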
{"title":"The subdivision wavelet transform with local shape control","authors":"Chong Zhao, Hanqiu Sun","doi":"10.1145/2669024.2669038","DOIUrl":"https://doi.org/10.1145/2669024.2669038","url":null,"abstract":"In this paper, we present a method to construct the efficient wavelet transform based on the matrix-valued Loop subdivision. The new wavelet transforms inherits the advantages of the matrix-valued subdivision and offers the good shape preserving ability. By adopting the local lifting scheme, it is efficient and uses less memory. Our experiments showed that the proposed wavelet transform is sufficiently stable and the fitting quality of resulted surfaces is good.","PeriodicalId":353683,"journal":{"name":"SIGGRAPH Asia 2014 Technical Briefs","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130298154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Splashing liquids with ambient gas pressure
Pub Date : 2014-11-24 DOI: 10.1145/2669024.2669036
Kazuhide Ueda, I. Fujishiro
Splashing occurs when a liquid drop hits a solid or fluid surface at high velocity. After the impact, the drop spreads and forms a corona with a thickened rim, which first develops annular undulations and then breaks into secondary droplets. Splashes are common in daily life, e.g., milk crowns, splashing paint, and raindrops falling onto a pool, and their characteristic deformations have a significant impact on the visual realism of the phenomena. Many experimental studies have sought criteria for when splashing occurs, but the physical mechanisms of splashing are still not completely understood. It was only recently discovered that ambient gas pressure is a principal factor in creating such a splash. In this paper, we therefore incorporate the effect of ambient gas pressure into the Navier-Stokes equations through SPH fluid simulation to represent splashing dynamics more accurately. Our experiments demonstrated that the new approach requires very little additional computing cost to capture realistic liquid behaviors such as fingering, which have not previously been attained by SPH or by most other fluid simulation schemes.
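A hedged sketch of where an ambient gas pressure term could enter a standard SPH evaluation, assuming a poly6 kernel, a simple linear equation of state, and a precomputed free-surface flag; the constants and the surface coupling are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

H = 0.1                      # smoothing length (assumed)
POLY6 = 315.0 / (64.0 * np.pi * H**9)
REST_DENSITY = 1000.0        # water, kg/m^3
STIFFNESS = 3.0              # equation-of-state stiffness (assumed)
P_AMBIENT = 101325.0         # ambient gas pressure on the free surface (assumed parameter)

def sph_density(positions, mass):
    """Standard SPH density summation with the poly6 kernel."""
    diff = positions[:, None, :] - positions[None, :, :]
    r2 = np.einsum('ijk,ijk->ij', diff, diff)
    w = np.where(r2 < H * H, POLY6 * (H * H - r2) ** 3, 0.0)
    return mass * w.sum(axis=1)

def pressure(density, is_surface):
    """Simple linear (weakly compressible) equation of state; surface particles
    additionally feel the ambient gas pressure. This is only a stand-in for the
    paper's coupling, not its actual model."""
    p = STIFFNESS * (density - REST_DENSITY)
    return p + np.where(is_surface, P_AMBIENT, 0.0)

# Toy usage: a small jittered block of particles, top layer flagged as free surface.
rng = np.random.default_rng(1)
pos = rng.random((200, 3)) * 0.2
rho = sph_density(pos, mass=0.02)
surf = pos[:, 2] > 0.18
print(pressure(rho, surf)[:5])
```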
{"title":"Splashing liquids with ambient gas pressure","authors":"Kazuhide Ueda, I. Fujishiro","doi":"10.1145/2669024.2669036","DOIUrl":"https://doi.org/10.1145/2669024.2669036","url":null,"abstract":"Splashing occurs when a liquid drop hits the solid or fluid surface at a high velocity. The drop after the impact spreads and forms a corona with a thickened rim, which first develops annular undulations and then breaks into secondary droplets. We have many chances to see splashes in our daily life, e.g., milk crown, splashing paint, and raindrops falling onto a pool, whose characteristics of deformation have a significant impact on the visual reality of the phenomena. Many experimental studies have been conducted to find criteria on when splashing would occur, but the physical mechanisms of splashing are still not completely understood. It was only recently discovered that ambient gas pressure is a principal factor for creating such a splash. In this paper, therefore, we newly incorporate the ambient gas pressure effect into the Navier-Stokes equations through SPH fluid simulation for representing more accurate splashing dynamics. Our experiments demonstrated that the new approach requires very little additional computing cost to capture realistic liquid behaviors like fingering, which have not previously been attained by SPH nor most schemes for fluid simulation.","PeriodicalId":353683,"journal":{"name":"SIGGRAPH Asia 2014 Technical Briefs","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115435256","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Panoramic e-learning videos for non-linear navigation
Pub Date : 2014-11-24 DOI: 10.1145/2669024.2669027
Rosália G. Schneider, M. M. O. Neto
We introduce a new interface for augmenting e-learning videos with panoramic frames and content-based navigation. Our interface gradually builds a panoramic video and allows users to navigate through the video by directly clicking on its contents, as opposed to using a conventional time slider. We demonstrate the effectiveness of our approach by successfully applying it to three representative styles of e-learning videos: Khan Academy, Coursera, and conventional lectures recorded with a camera. The techniques described provide more efficient ways of exploring the benefits of e-learning videos. As such, they have the potential to impact education by providing more customizable learning experiences for millions of e-learners around the world.
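One plausible way to implement click-based navigation, assuming OpenCV, ORB feature matching, and per-frame homographies into the panorama (the paper does not specify these choices): a click in the panorama is mapped to the first frame whose projected footprint contains it. The navigation rule and all function names below are hypothetical.

```python
import numpy as np
import cv2

orb = cv2.ORB_create(1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def homography_to_previous(prev_gray, cur_gray):
    """Estimate a homography mapping the current frame into the previous frame."""
    kp1, des1 = orb.detectAndCompute(cur_gray, None)
    kp2, des2 = orb.detectAndCompute(prev_gray, None)
    matches = matcher.match(des1, des2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H

def frame_for_click(click_xy, frame_homographies, frame_shape):
    """Return the index of the first frame whose projected footprint contains the
    clicked panorama point (hypothetical navigation rule)."""
    h, w = frame_shape
    corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
    pt = (float(click_xy[0]), float(click_xy[1]))
    for i, H in enumerate(frame_homographies):      # H maps frame i into panorama coordinates
        footprint = cv2.perspectiveTransform(corners, H)
        if cv2.pointPolygonTest(footprint, pt, False) >= 0:
            return i
    return None
```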
{"title":"Panoramic e-learning videos for non-linear navigation","authors":"Rosália G. Schneider, M. M. O. Neto","doi":"10.1145/2669024.2669027","DOIUrl":"https://doi.org/10.1145/2669024.2669027","url":null,"abstract":"We introduce a new interface for augmenting e-learning videos with panoramic frames and content-based navigation. Our interface gradually builds a panoramic video, and allows users to navigate through such video by directly clicking on its contents, as opposed to using a conventional time slider. We demonstrate the effectiveness of our approach by successfully applying it to three representative styles of e-learning videos: Khan Academy, Coursera, and conventional lecture recorded with a camera. The techniques described provide more efficient ways for exploring the benefits of e-learning videos. As such, they have the potential to impact education by providing more customizable learning experiences for millions of e-learners around the world.","PeriodicalId":353683,"journal":{"name":"SIGGRAPH Asia 2014 Technical Briefs","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125774751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Local and nonlocal guidance coupled surface deformation
Pub Date : 2014-11-24 DOI: 10.1145/2669024.2669030
Yufeng Tang, Dongqing Zou, Jianwei Li, Xiaowu Chen
This paper presents a novel 3D shape surface deformation method with local and nonlocal guidance. It is important to deform a mesh while preserving both the global shape and local properties. Previous methods generally deform a surface according to local geometric affinity, which leads to artifacts such as local and global shape distortion. Instead, our approach uses locally linear embedding (LLE) to construct a nonlocal relationship between each vertex and its nonlocal neighbors in a geometric feature space, and uses the well-known local neighborhood coherence to represent the local relationship. We then couple the local and nonlocal guidance to propagate the local deformation over the whole surface while maintaining both relationships. The nonlocal guidance essentially preserves the global shape, the local guidance maintains the local properties, and the two complement each other when propagating the deformation. Our method can also be extended to mesh merging. Experimental results on various models demonstrate the effectiveness of our method.
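The LLE weights used for the nonlocal guidance follow the standard locally linear embedding fit: each vertex's feature vector is reconstructed as an affine combination of its neighbors' features by solving a small regularized linear system. A minimal sketch, assuming generic feature vectors and a given neighbor set rather than the paper's specific feature space:

```python
import numpy as np

def lle_weights(feature, neighbor_features, reg=1e-3):
    """Solve for weights w that best reconstruct `feature` from `neighbor_features`,
    with the weights summing to one (standard LLE local fit, regularized for stability)."""
    Z = neighbor_features - feature          # shift neighbors to the query point, shape (k, d)
    G = Z @ Z.T                              # local Gram matrix, shape (k, k)
    G += reg * np.trace(G) * np.eye(len(G))  # regularize (needed when k > d or G is singular)
    w = np.linalg.solve(G, np.ones(len(G)))
    return w / w.sum()

# Toy usage: a vertex feature reconstructed from 6 nonlocal neighbors in a 5-D feature space.
rng = np.random.default_rng(2)
f = rng.random(5)
nbrs = rng.random((6, 5))
w = lle_weights(f, nbrs)
print(w, np.allclose(w.sum(), 1.0))
```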
{"title":"Local and nonlocal guidance coupled surface deformation","authors":"Yufeng Tang, Dongqing Zou, Jianwei Li, Xiaowu Chen","doi":"10.1145/2669024.2669030","DOIUrl":"https://doi.org/10.1145/2669024.2669030","url":null,"abstract":"This paper presents a novel 3D shape surface deformation method with local and nonlocal guidance. It is important to deform a mesh while preserving the global shape and local properties. Previous methods generally deform a surface according to the local geometric affinity, which leads to artifacts such as local and global shape distortion. Instead, our approach uses the locally linear embedding (LLE) to construct the nonlocal relationship for each vertex and its nonlocal neighbors in a geometric feature space, and uses a well known local neighborhood coherence to represent the local relationship. We then couple these two local and nonlocal guidance together to propagate the local deformation over the whole surface while maintaining these two relationships. The nonlocal guidance essentially preserves the global shape and the local guidance maintains the local properties, and these two guidance complements each other when propagating the deformation. Our method can be extended for mesh merging. Experimental results on various models demonstrate the effectiveness of our method.","PeriodicalId":353683,"journal":{"name":"SIGGRAPH Asia 2014 Technical Briefs","volume":"189 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124313550","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
When does the hidden butterfly not flicker?
Pub Date : 2014-11-24 DOI: 10.1145/2669024.2669026
Jing Liu, Soja-Marie Morgens, RobertC Sumner, Luke Buschmann, Yu Zhang, James Davis
The emergence of high frame rate computational displays has created an opportunity for viewing experiences impossible on traditional displays. These displays can create views personalized to multiple users, encode hidden messages, or even decompose and encode a targeted light field to create glasses-free 3D views [Masia et al. 2013].
{"title":"When does the hidden butterfly not flicker?","authors":"Jing Liu, Soja-Marie Morgens, RobertC Sumner, Luke Buschmann, Yu Zhang, James Davis","doi":"10.1145/2669024.2669026","DOIUrl":"https://doi.org/10.1145/2669024.2669026","url":null,"abstract":"The emergence of high frame rate computational displays has created an opportunity for viewing experiences impossible on traditional displays. These displays can create views personalized to multiple users, encode hidden messages, or even decompose and encode a targeted light field to create glasses-free 3D views [Masia et al. 2013].","PeriodicalId":353683,"journal":{"name":"SIGGRAPH Asia 2014 Technical Briefs","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124391930","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
Underwater reconstruction using depth sensors
Pub Date : 2014-11-24 DOI: 10.1145/2669024.2669042
Alexandru Dancu, M. Fourgeaud, Zlatko Franjcic, R. Avetisyan
In this paper we describe experiments in which we acquire range images of underwater surfaces with four types of depth sensors and attempt to reconstruct those surfaces. Two conditions are tested: acquiring range images with the sensors submerged, and with the sensors held above the water line recording through the water. We found that only the Kinect sensor is able to acquire depth images of submerged surfaces when held above the water. We compare the reconstructed underwater geometry with meshes obtained when the surfaces were not submerged. These findings show that 3D underwater reconstruction using depth sensors is possible, despite the strong absorption by water of the near-infrared light in which these sensors operate.
{"title":"Underwater reconstruction using depth sensors","authors":"Alexandru Dancu, M. Fourgeaud, Zlatko Franjcic, R. Avetisyan","doi":"10.1145/2669024.2669042","DOIUrl":"https://doi.org/10.1145/2669024.2669042","url":null,"abstract":"In this paper we describe experiments in which we acquire range images of underwater surfaces with four types of depth sensors and attempt to reconstruct underwater surfaces. Two conditions are tested: acquiring range images by submersing the sensors and by holding the sensors over the water line and recording through water. We found out that only the Kinect sensor is able to acquire depth images of submersed surfaces by holding the sensor above water. We compare the reconstructed underwater geometry with meshes obtained when the surfaces were not submersed. These findings show that 3D underwater reconstruction using depth sensors is possible, despite the high water absorption of the near infrared spectrum in which these sensors operate.","PeriodicalId":353683,"journal":{"name":"SIGGRAPH Asia 2014 Technical Briefs","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130717675","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 24
Deformation of 2D flow fields using stream functions
Pub Date : 2014-11-24 DOI: 10.1145/2669024.2669039
Syuhei Sato, Y. Dobashi, Kei Iwasaki, Tsuyoshi Yamamoto, T. Nishita
Recently, visual simulation of fluids has become an important element in many applications, such as movies and computer games. These fluid animations are usually created by physically based fluid simulation. However, the simulation often requires very expensive computation to create realistic fluid animations, so a user who wants to create many variations must run the simulation repeatedly, at a prohibitive computational cost. To address this problem, this paper proposes a method for deforming the velocity fields of fluids while preserving the divergence-free condition. In this paper, we focus on grid-based 2D fluid simulations. Our system allows the user to interactively create various fluid animations from a single set of velocity fields generated by the fluid simulation. In a preprocess, our method converts the input velocity fields into scalar fields representing the stream functions. At run time, the user deforms the grid representing the scalar stream functions, and the deformed velocity fields are then obtained by applying the curl operator to the deformed stream functions. The velocity fields obtained by this process naturally preserve the divergence-free condition. For the deformation of the grid, we use a method based on moving least squares. The usefulness of our method is demonstrated by several examples.
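The property the method relies on is that a velocity field derived from a scalar stream function psi, with u = dpsi/dy and v = -dpsi/dx, is divergence-free by construction, so any deformation applied to psi still yields a valid incompressible flow. A minimal sketch on a regular grid with central differences (not the paper's grid-deformation pipeline) verifies this numerically.

```python
import numpy as np

def velocity_from_stream(psi, dx=1.0, dy=1.0):
    """u = dpsi/dy, v = -dpsi/dx on a regular grid (central differences)."""
    dpsi_dy, dpsi_dx = np.gradient(psi, dy, dx)   # psi indexed as psi[y, x]
    return dpsi_dy, -dpsi_dx

def divergence(u, v, dx=1.0, dy=1.0):
    du_dx = np.gradient(u, dx, axis=1)
    dv_dy = np.gradient(v, dy, axis=0)
    return du_dx + dv_dy

y, x = np.mgrid[0:64, 0:64] * 0.1
psi = np.sin(x) * np.cos(y)          # any scalar field works; deforming psi deforms the flow
u, v = velocity_from_stream(psi, dx=0.1, dy=0.1)
print(np.abs(divergence(u, v, 0.1, 0.1)).max())  # ~0, up to floating-point round-off
```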
{"title":"Deformation of 2D flow fields using stream functions","authors":"Syuhei Sato, Y. Dobashi, Kei Iwasaki, Tsuyoshi Yamamoto, T. Nishita","doi":"10.1145/2669024.2669039","DOIUrl":"https://doi.org/10.1145/2669024.2669039","url":null,"abstract":"Recently, visual simulation of fluids has become an important element in many applications, such as movies and computer games. These fluid animations are usually created by physically-based fluid simulation. However, the simulation often requires very expensive computational cost for creating realistic fluid animations. Therefore, when the user tries to create various fluid animations, he or she must execute fluid simulation repeatedly, which requires a prohibitive computational time. To address this problem, this paper proposes a method for deforming velocity fields of fluids while preserving the divergence-free condition. In this paper, we focus on grid-based 2D fluid simulations. Our system allows the user to interactively create various fluid animations from a single set of velocity fields generated by the fluid simulation. In a preprocess, our method converts the input velocity fields into scalar fields representing the stream functions. At run-time, the user deforms the grid representing the scalar stream functions and the deformed velocity fields are then obtained by applying a curl operator to the deformed scalar stream functions. The velocity fields obtained by this process naturally perseveres the divergence-free condition. For the deformation of the grid, we use a method based on Moving Least Squares. The usefulness of our method is demonstrated by several examples.","PeriodicalId":353683,"journal":{"name":"SIGGRAPH Asia 2014 Technical Briefs","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128287106","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Feature-oriented writing process reproduction of Chinese calligraphic artwork
Pub Date : 2014-11-24 DOI: 10.1145/2669024.2669032
Lijie Yang, Tianchen Xu, Xiaoshan Li, E. Wu
Reproducing the writing process of ancient handwritten artworks is a popular way of appreciating and learning the expert skills of Chinese calligraphy. This paper presents a system for reproducing the writing processes of calligraphic characters in different styles. In order to convey the precise brush technique within a stroke, a calligraphic character is first decomposed into several strokes; the writing trajectory and footprint data of each stroke are then calculated from its edge and skeleton, which reveal the relations between shape description and writing skill; and finally the character is rendered dynamically along the trajectory in the oriental ink style using our writing rhythm and brush footprint models. Consequently, animations of calligraphy writing can be produced that convey both the shape and the spirit of the work ([see PDF]), providing a visual and relaxed way to comprehend the complicated and difficult techniques of Chinese calligraphy.
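The trajectory and footprint data are computed from each stroke's edge and skeleton; a minimal sketch of the skeleton part, assuming scikit-image and a binarized grayscale stroke image (the paper's writing rhythm and brush footprint models are not reproduced):

```python
import numpy as np
from skimage.morphology import skeletonize

def stroke_skeleton(gray, threshold=128):
    """Binarize a grayscale stroke image (ink = dark) and extract its 1-pixel skeleton,
    which can serve as a first estimate of the writing trajectory."""
    ink = gray < threshold              # dark pixels are ink
    return skeletonize(ink)

# Toy usage: a synthetic diagonal stroke with some thickness.
img = np.full((64, 64), 255, dtype=np.uint8)
rr = np.arange(8, 56)
for off in range(-3, 4):                # thicken the stroke to 7 pixels
    img[np.clip(rr + off, 0, 63), rr] = 0
skel = stroke_skeleton(img)
print(int(skel.sum()), "skeleton pixels")   # far fewer pixels than the full stroke
```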
{"title":"Feature-oriented writing process reproduction of Chinese calligraphic artwork","authors":"Lijie Yang, Tianchen Xu, Xiaoshan Li, E. Wu","doi":"10.1145/2669024.2669032","DOIUrl":"https://doi.org/10.1145/2669024.2669032","url":null,"abstract":"Reproducing the writing process of ancient handwritten artworks is a popular way to appreciating and learning the expert skills of Chinese calligraphy. This paper presents a system for reappearing the writing processes of calligraphic characters in different styles. In order to convey the accurate brush skill inside a stroke, a calligraphic character is first decomposed into several strokes, then the writing trajectory and footprint data of each stroke are calculated based on the edge and skeleton, which reveal the relations between shape description and writing skills, and finally the character can be rendered in the oriental ink style dynamically along the trajectory using our writing rhythm and brush footprint models. Consequently, the animation of calligraphy writing can be produced with both shape and spirit features conveyed ([see PDF]), and thus provides a visual and relax way to the comprehension of the complicated and difficult techniques in Chinese calligraphy.","PeriodicalId":353683,"journal":{"name":"SIGGRAPH Asia 2014 Technical Briefs","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123533354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 11